Your car knows too much about you. That could be a privacy nightmare.

The car you drive says more about you than you think. …As Jon Callas, the Electronic Frontier Foundation’s director of technology projects, explained to Mashable, newer cars — and Teslas in particular — are in many ways like smartphones that just happen to have wheels. They are often WiFi-enabled, come with over a hundred CPUs, and have Bluetooth embedded throughout. In other words, they’re a far cry from the automobiles of even just 20 years ago. Read More

#surveillance

With user-generated content on the rise, platforms are emerging to support this new type of creator

As the definition of user-generated content (UGC) expands, dedicated platforms are emerging to support this new type of creator. These nascent platforms are more than just places to create and share user-generated content: rather, they combine elements of talent management, venture capital and marketing to help UGC creators turn a profit. Read More

#vfx

The Battle for Digital Privacy Is Reshaping the Internet

As Apple and Google enact privacy changes, businesses are grappling with the fallout, Madison Avenue is fighting back and Facebook has cried foul.

Apple introduced a pop-up window for iPhones in April that asks people for their permission to be tracked by different apps.

Google recently outlined plans to disable a tracking technology in its Chrome web browser.

And Facebook said last month that hundreds of its engineers were working on a new method of showing ads without relying on people’s personal data.

The developments may seem like technical tinkering, but they were connected to something bigger: an intensifying battle over the future of the internet. Read More

#surveillance

An American Company Fears Its Windows Hacks Helped India Spy on China and Pakistan

A U.S. company’s tech was abused by the Indian government, amid warnings that Americans are contributing to a spyware industry already under fire for being out of control.

Earlier this year, researchers at Russian cybersecurity firm Kaspersky witnessed a cyberespionage campaign targeting Microsoft Windows PCs at government and telecom entities in China and Pakistan. The attacks began in June 2020 and continued through April 2021. What piqued the researchers’ interest was the hacking software used by the digital spies, whom Kaspersky had dubbed Bitter APT, a pseudonym for an unspecified government agency. Aspects of the code looked like some that the Moscow-based antivirus provider had previously seen and attributed to a company it gave the cryptonym “Moses.”

Moses, said Kaspersky, was a mysterious provider of hacking tech known as a “zero-day exploit broker.” Such companies operate in a niche market within the $130 billion overall cybersecurity industry, creating software—an “exploit”—that can hack into computers via unpatched vulnerabilities known as “zero days” (so called because developers have had zero days to fix the flaw before it is exploited). Read More

#cyber

Leveraging Multi-Cloud Clusters in Real-World Scenarios

Read More

#cloud, #videos

Has AI found a new Foundation?

In August, 32 faculty and 117 research scientists, postdocs, and students at Stanford University, long one of the biggest players in AI, declared that there has been a “sweeping paradigm shift in AI.” They coined a new term, “Foundation Models,” to characterize the new paradigm, joined forces in a “Center for Research on Foundation Models,” and published the massive 212-page report “On the Opportunities and Risks of Foundation Models.”

Although the term is new, the general approach is not. You train a big neural network (like the well-known GPT-3) on an enormous amount of data, and then you adapt (“fine-tune”) the model to a bunch of more specific tasks (in the words of the report, “a foundation model …[thus] serves as [part of] the common basis from which many task-specific models are built via adaptation”). The basic model thus serves as the “foundation” (hence the term) of AIs that carry out more specific tasks. The approach started to gather momentum in 2018, when Google developed the natural language processing model called BERT, and it became even more popular with the introduction last year of OpenAI’s GPT-3. Read More
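
To make the pretrain-then-adapt recipe concrete, here is a minimal fine-tuning sketch using Hugging Face’s transformers library with a BERT backbone. The two-example sentiment “dataset” and its labels are hypothetical stand-ins; a real run would use thousands of examples, batching, and evaluation.

```python
# A minimal sketch of the "adaptation" step: take a pretrained
# foundation model (here BERT via Hugging Face transformers) and
# fine-tune a task-specific classification head on labelled examples.
# The toy sentiment data below is hypothetical.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # pretrained backbone + fresh 2-way head
)

texts = ["a delightful film", "a tedious, joyless slog"]  # hypothetical task data
labels = torch.tensor([1, 0])  # 1 = positive, 0 = negative

batch = tokenizer(texts, padding=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few gradient steps stand in for a real training loop
    out = model(**batch, labels=labels)  # cross-entropy loss computed internally
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The report’s point is that the expensive pretraining happens once, while adaptation like the loop above is comparatively cheap, so many task-specific models can be built on the same foundation.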

#artificial-intelligence, #machine-learning

A way to spot computer-generated faces

A small team of researchers from the State University of New York at Albany, the State University of New York at Buffalo, and Keya Medical has found a common flaw by which computer-generated faces can be identified: the pupils of real people are consistently round or elliptical, while the pupils in faces produced by generative adversarial networks (GANs) tend to have irregular shapes. The group has written a paper describing its findings and has uploaded it to the arXiv preprint server.

…The researchers note that in many cases, users can simply zoom in on the eyes of a person they suspect may not be real to spot the pupil irregularities. They also note that it would not be difficult to write software to spot such errors and for social media sites to use it to remove such content. Unfortunately, they also note that now that such irregularities have been identified, the people creating the fake pictures can simply add a feature to ensure the roundness of pupils. Read More
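
As an illustration of the general idea (not the researchers’ actual code), the sketch below fits an ellipse to a segmented pupil’s outline and flags faces whose pupils deviate too far from it. It assumes a binary pupil mask is already available, and the 0.9 IoU threshold is an illustrative guess rather than a value from the paper.

```python
# Sketch of an automated pupil-regularity check: fit an ellipse to a
# segmented pupil and flag outlines that deviate from it. Assumes a
# uint8 binary mask (255 = pupil); segmentation is out of scope here.
import cv2
import numpy as np

def pupil_looks_generated(pupil_mask: np.ndarray, iou_threshold: float = 0.9) -> bool:
    """Return True if the pupil outline is too irregular to be a real pupil."""
    contours, _ = cv2.findContours(pupil_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return False
    contour = max(contours, key=cv2.contourArea)
    if len(contour) < 5:  # cv2.fitEllipse needs at least 5 points
        return False
    ellipse = cv2.fitEllipse(contour)

    # Rasterize both shapes and compare them with intersection-over-union.
    actual = np.zeros_like(pupil_mask)
    fitted = np.zeros_like(pupil_mask)
    cv2.drawContours(actual, [contour], -1, 255, thickness=-1)
    cv2.ellipse(fitted, ellipse, 255, thickness=-1)
    inter = np.logical_and(actual, fitted).sum()
    union = np.logical_or(actual, fitted).sum()
    iou = inter / union if union else 0.0

    # Real pupils are close to elliptical; a low IoU suggests a GAN artifact.
    return bool(iou < iou_threshold)
```

As the researchers caution, a check like this stops working as soon as generators are updated to enforce round pupils.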

#fake, #gans, #image-recognition

From Motor Control to Team Play in Simulated Humanoid Football

Read More

Read Paper

#videos, #machine-learning

Google launches ‘digital twin’ tool for logistics and manufacturing

Google today announced Supply Chain Twin, a new Google Cloud solution that lets companies build a digital twin — a representation of their physical supply chain — by organizing data to get a more complete view of suppliers, inventories, and events like weather. Arriving alongside Supply Chain Twin is the Supply Chain Pulse module, which can be used with Supply Chain Twin to provide dashboards, analytics, alerts, and collaboration in Google Workspace. Read More
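
The sketch below is not Google’s API; it is only a toy illustration of the digital-twin idea: one in-memory model that joins inventory, inbound shipments, and external events such as weather so they can be queried together. All names in it are hypothetical.

```python
# Not Google's API: a hypothetical toy model of a supply-chain
# "digital twin" that joins inventory, shipments, and weather events.
from dataclasses import dataclass, field

@dataclass
class Shipment:
    supplier: str
    sku: str
    units: int
    route: str

@dataclass
class SupplyChainTwin:
    inventory: dict[str, int] = field(default_factory=dict)  # sku -> units on hand
    shipments: list[Shipment] = field(default_factory=list)
    disrupted_routes: set[str] = field(default_factory=set)  # e.g. from a weather feed

    def record_weather_event(self, route: str) -> None:
        self.disrupted_routes.add(route)

    def at_risk_skus(self) -> set[str]:
        """SKUs whose inbound shipments travel a disrupted route."""
        return {s.sku for s in self.shipments if s.route in self.disrupted_routes}

twin = SupplyChainTwin(inventory={"widget-a": 120})
twin.shipments.append(Shipment("Acme Parts", "widget-a", 500, "port-of-oakland"))
twin.record_weather_event("port-of-oakland")  # storm closes the port
print(twin.at_risk_skus())                    # {'widget-a'}
```

The value of the real product presumably lies in doing this joining at enterprise scale across live supplier and logistics feeds rather than in memory.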

#big7, #iot

Ex-Google exec describes 4 top dangers of artificial intelligence

In a new interview, AI expert Kai-Fu Lee — who worked as an executive at Google (GOOG, GOOGL), Apple (AAPL), and Microsoft (MSFT) — explained the top four dangers of burgeoning AI technology: externalities, personal data risks, inability to explain consequential choices, and warfare.

“The single largest danger is autonomous weapons,” he says. Read More

#strategy