Why boomers ‘like’ AI pics on Facebook, mind-reading AI breakthrough: AI Eye 

Not even your mind is safe from AI

AI mind-reading technology has taken a giant leap forward thanks to MindEye2 from Stability AI and Princeton.

Previous mind-visualization AI models have been able to create somewhat accurate pictures of what people are thinking about, but they require a lot of expensive, per-individual training using functional magnetic resonance imaging (fMRI).

The new model is much more general, requiring as little as one hour of training on a new individual to attain “state-of-the-art image retrieval” and, in the authors’ words, it “demonstrates how accurate reconstructions of perception are possible from a single visit to the MRI facility.”

MindEye2 gets scary good after 40 hours.

It still takes about 40 hours of training to get properly accurate images; however, the steep trajectory of the tech’s development suggests it will become easier and easier.
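For a sense of what the “image retrieval” metric involves: models in this family map fMRI activity into an image-embedding space and then look up the closest candidate picture. Here is a minimal sketch of that retrieval step, with random vectors standing in for real fMRI-derived and image embeddings:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Stand-ins: MindEye-style pipelines learn a mapping from fMRI voxels into
# an image-embedding space (such as CLIP's). Here, both the predicted
# "brain" embedding and the gallery of candidate images are made up.
gallery = rng.standard_normal((1000, 512))    # 1,000 candidate image embeddings
true_idx = 42
brain_pred = gallery[true_idx] + 0.3 * rng.standard_normal(512)  # noisy prediction

def normalize(x: np.ndarray) -> np.ndarray:
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Retrieval step: return the gallery image whose embedding is most
# cosine-similar to the embedding predicted from the brain scan.
scores = normalize(gallery) @ normalize(brain_pred)
print("retrieved:", int(scores.argmax()), "| actually seen:", true_idx)
```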

While amazing from a technical perspective, the technology is clearly intrusive and has worrying implications for privacy. It’s the sort of tech authoritarian governments could embrace to make the concept of a thought crime a reality.

Stop boomers liking AI pics

There’s a meme going around that Facebook has been hijacked by endless AI pictures of cute kids, amazing houses, and too-good-to-be-true art projects that boomers are applauding without realizing they were all generated by AI.

A new pre-print examining 120 Facebook pages that pump out AI-generated posts leading users to spam and scams suggests the meme is true.

Back in 2022, Facebook changed its algorithm to start spamming users with content from pages they don’t follow, and such unconnected content now accounts for a quarter of the newsfeed, up from 8% in 2021. As the algo only really cares about engagement, AI-created content that can elicit a reaction now receives hundreds of millions of views. One post with an AI-generated image was seen by 40 million people and ranked in the top 20 most popular posts worldwide in Q3 last year.

Typically, the spammers and scammers either buy or hijack an existing page before pumping out AI-generated content. The researchers found 43 pages posting AI images of log cabins, 25 posting AI images of cute kids, 17 posting wood carvings, and 10 focused on AI Jesus.

Spot the difference with these AI pics.

Recurring themes included cute kids proudly displaying a cake or item they’d made with text saying, “This is my first cake! Will be glad for your marks” or “My daughter is 9, she is taking part in a school competition. Let’s encourage her.”

Would a boomer in your life like these pics?

The researchers wrote: “We observed that Facebook users would often comment on the pictures in ways suggesting they did not recognize the images were fake — congratulating, for example, an AI-generated child for an AI-generated painting.”

Another frequent line was “no one ever blessed me” alongside AI pics of old people, amputees, and infants, while the phrase “Made it with my own hands” was ironically plastered over AI-generated pics of unfeasibly good woodwork, ice sculptures and sand castles.

In a bizarre twist, a crab version of Jesus being worshipped by other crabs was also tagged with the line “Made it with my own hands!” and received 209,000 engagements and more than 4,000 comments.

Facebook is clearly aware of the problem and has announced plans to watermark AI-generated content created using its own gen AI features. It will also implement the C2PA standard to label images from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock once those firms “implement their plans for adding metadata to images created by their tools.”

There are some telltale signs this isn’t a real picture of Jesus.

Live long and prosper with the crypto and AI app Rejuve.ai

AI Eye recently caught up with Deborah Duong, chief technology officer of crypto and AI longevity app Rejuve.ai, and was surprised to notice she was wearing three smartwatches at once.

It turned out she’s been recording her own health data to see how reliably each of the watches — a Garmin, a Fitbit and an Apple Watch — measures things like heart rate and blood oxygen levels, so the data is comparable when it’s fed into the app and analyzed by AI.

“I’ve been doing it for two years. My daughter thinks I’m crazy!” laughed Duong.
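As a rough illustration of the kind of cross-device comparability check Duong describes (not Rejuve.ai’s actual pipeline), a minimal sketch with made-up heart-rate readings might look like this:

```python
import numpy as np

# Hypothetical resting heart-rate readings (bpm) taken at the same times
# from three watches worn simultaneously. Real data would come from each
# vendor's export; these numbers are invented for illustration.
readings = {
    "garmin":      np.array([62, 64, 63, 66, 65, 61, 63]),
    "fitbit":      np.array([60, 63, 61, 65, 64, 60, 62]),
    "apple_watch": np.array([63, 65, 64, 67, 66, 62, 64]),
}

# Pick one device as a reference, then estimate each watch's bias and
# spread relative to it, so readings can be put on a comparable scale
# before any downstream analysis.
reference = readings["apple_watch"]
for name, values in readings.items():
    diff = values - reference
    print(f"{name:>11}: bias {diff.mean():+.1f} bpm, "
          f"mean abs. error {np.abs(diff).mean():.1f} bpm")

# A simple calibration: subtract each device's estimated bias.
calibrated = {n: v - (v - reference).mean() for n, v in readings.items()}
```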

The idea is to crowdsource health data by rewarding users with a token that can be exchanged for discounts on longevity treatments. The AI analyzes your data (including blood or genomic tests you upload) and then provides recommendations based on a database of 300 meta-analyses of randomized controlled trials.

“We have a way to put all of those meta-analyses together into a coherent picture to calculate your risk of certain conditions related to longevity,” explained Duong.
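Rejuve.ai hasn’t published the details of that aggregation, but the textbook way to pool several studies of the same intervention into one estimate is inverse-variance weighting. A minimal sketch, using hypothetical effect sizes:

```python
import numpy as np

# Hypothetical effect sizes (log relative risk) and standard errors from
# several meta-analyses of the same intervention. Rejuve.ai's actual
# aggregation method isn't public; this is just the textbook
# fixed-effect pooling formula.
effects = np.array([-0.12, -0.08, -0.15])
std_errs = np.array([0.05, 0.04, 0.07])

weights = 1.0 / std_errs**2                   # inverse-variance weights
pooled = (weights * effects).sum() / weights.sum()
pooled_se = np.sqrt(1.0 / weights.sum())

print(f"pooled effect: {pooled:.3f} ± {1.96 * pooled_se:.3f} (95% CI)")
```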

Rejuve.ai chief technology officer Deborah Duong and CEO Jasmine Smith (Fenton Creative)

AIs are writing scientific papers peer-reviewed by other AIs

More and more scientific papers are slipping through the peer review process despite signs they were written by ChatGPT.

French professor Guillaume Cabanac posted an example recently of a paper on lithium metal batteries published by scientific journal publisher Elsevier. The very first sentence began with: “Certainly, here is a,” which is a favored phrase of ChatGPT.

“How come none of the coauthors, editor-in-chief, reviewers, typesetters noticed? How can this happen with regular peer review?” he asked. Elsevier said it was investigating, noting that its policies allow the use of LLMs as long as the use is declared. It wasn’t declared in this case, according to Cabanac.

There are dozens of other examples of scientific papers on Google Scholar that contain the phrase “Certainly, here is a.”

Certainly, here is a science paper that’s probably written by ChatGPT (X)

Similar examples from Elsevier soon followed, including a photovoltaic research paper containing the tip-off phrase “regenerate response” and a medical article about an iatrogenic portal vein injury that said, “I’m very sorry but… I am an AI language model.”

Adding insult to injury, a study of peer reviews of scientific papers found that between 6.5% and 16.0% of the reviews had themselves been substantially written by AIs. The estimate is based on the frequency of words like “commendable,” “meticulous,” and “intricate,” which appear up to 30 times more often in LLM-generated text.
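The study’s actual method fits a statistical model over corpus-wide word frequencies, but the underlying signal (counting how often the marker words appear in a review) can be sketched in a few lines; the word list comes from the study, while the example text below is invented:

```python
import re

# Words the study found appear far more often in LLM-generated reviews.
MARKER_WORDS = {"commendable", "meticulous", "intricate"}

def marker_rate(review: str) -> float:
    """Fraction of tokens in the review that are marker words."""
    tokens = re.findall(r"[a-z]+", review.lower())
    if not tokens:
        return 0.0
    return sum(t in MARKER_WORDS for t in tokens) / len(tokens)

review = ("The authors present a commendable and meticulous analysis "
          "of an intricate problem.")
print(f"marker-word rate: {marker_rate(review):.3%}")

# A corpus-level estimate would compare this rate against the rate in
# reviews written before ChatGPT existed, rather than flagging any
# single review in isolation.
```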

Robots get brainz

Some experts believe that one path to a well-rounded AGI is to put AIs into a physical form and have them interact with the physical world.

At its GPU Technology Conference this week, NVIDIA unveiled Project GR00T (Generalist Robot 00 Technology), an attempt to build a mind for humanoid robots. The technology aims to enable robots to reason, understand natural language, learn skills and emulate human movements from observation. It runs on the new Jetson Thor system-on-a-chip and an upgraded version of NVIDIA’s Isaac robotics platform.

Jensen Huang, founder of NVIDIA, called building a general-purpose foundational model for robots “one of the most exciting foundational problems to solve in AI today.”

A flashy video from the NVIDIA presentation shows researchers training countless instances of the robot in a simulated environment. It also shows robots apparently learning from a handful of human demonstrations how to use a juicer, take a tray out of the oven or play the drums.

I am Gr00T and I play the drums. (Nvidia)

Some demonstrations were labeled “teleoperated,” while others saw a virtual robot called an “Omniverse digital twin” mirror the actions of a human before the real-world robot did likewise.

“We are at an inflection point in history, with human-centric robots like Digit poised to change labor forever. Modern AI will accelerate development, paving the way for robots like Digit to help people in all aspects of daily life,” said Jonathan Hurst, co-founder and chief robot officer of Agility Robotics, which makes the Digit humanoid robot.

Near founder Illia Polosukhin also appeared at the conference, chatting with Huang about his role in the seminal “Attention Is All You Need” transformer paper that led to modern LLMs.

I was honored to speak at the @nvidia GTC event today with Jensen Huang and all the "Attention Is All You Need" coauthors! I got to share some of we've built at @NEARProtocol.

Session videos are available in case you missed it: https://t.co/z934h4oOFT pic.twitter.com/aurpmZKSZZ — Illia (root.near) (🇺🇦, ⋈) (@ilblackdragon) March 20, 2024

A separate demonstration of the Figure 01 humanoid robot, which uses OpenAI technology, was seen by 10 million people. It shows a robot that sounds suspiciously like Rob Lowe putting dishes in a drying rack and handing a human an apple.

Figure’s AI lead, Corey Lynch, said the robot can plan future actions, reflect on things that have happened, and explain its reasoning verbally.

“Even just a few years ago, I would have thought having a full conversation with a humanoid robot while it plans and carries out its own fully learned behaviors would be something we would have to wait decades to see. Obviously, a lot has changed :).”

All Killer, No Filler AI News

— OpenAI boss Sam Altman says that GPT-4 “kind of sucks” compared to what comes next. That won’t necessarily be GPT-5, though. “We will release an amazing model this year. I don’t know what we’ll call it,” he said.

— Apple has published a new preprint paper on the MM1 family of multimodal AI models, which are able to understand both text and images and run up to 30 billion parameters in size.

— India has dropped plans to force AI model creators to seek approval from the government, following a backlash.

— After Elon Musk open-sourced the Grok model, AI doomer Tolga Bilge called the entire concept of open-sourcing AI models a “total scam” as the code and weights released do not include training data or give users insight into the inner workings of the model. You do “not have the ability to reproduce the program, you just have the program!” he said.

"Open source AI" is a total scam:

With open source software one releases the necessary information (source code) in order to reproduce the program. This also allows one to inspect and modify the software.

"Open source" AI is more akin to simply releasing a compiled binary.… pic.twitter.com/GTcd2OLh4p — Tolga Bilge (@TolgaBilge_) March 17, 2024

— Blogger Noah Smith argues there will still be plenty of good, high-paying jobs for humans left even after AIs can do them. His argument rests on the economic concept of “comparative advantage” — that there will still be lots of things it makes economic sense to get humans to do. AIs will be constrained by energy and the amount of compute available, he says, so their capacity will be prioritized for the most valuable tasks.

— The AI + crypto sector’s market cap has surged by 150% to $25.1 billion in less than a month, led by Internet Computer (ICP), Bittensor (TAO), The Graph (GRT), Fetch.ai (FET), SingularityNET (AGIX) and Worldcoin (WLD).
