OpenAI shuts down election influence operation that used ChatGPT

OpenAI has banned a cluster of ChatGPT accounts linked to an Iranian influence operation that was generating content about the U.S. presidential election, according to a blog post on Friday. The company says the operation created AI-generated articles and social media posts, though it doesn’t seem that it reached much of an audience.

This is not the first time OpenAI has banned accounts linked to state-affiliated actors using ChatGPT maliciously. In May the company disrupted five campaigns using ChatGPT to manipulate public opinion.

These episodes are reminiscent of state actors using social media platforms like Facebook and Twitter to attempt to influence previous election cycles. Now similar groups (or perhaps the same ones) are using generative AI to flood social channels with misinformation. Similar to social media companies, OpenAI seems to be adopting a whack-a-mole approach, banning accounts associated with these efforts as they come up.

OpenAI says its investigation of this cluster of accounts benefited from a Microsoft Threat Intelligence report published last week, which identified the group (which it calls Storm-2035) as part of a broader campaign to influence U.S. elections operating since 2020.

Microsoft said Storm-2035 is an Iranian network with multiple sites imitating news outlets and “actively engaging US voter groups on opposing ends of the political spectrum with polarizing messaging on issues such as the US presidential candidates, LGBTQ rights, and the Israel-Hamas conflict.” The playbook, as it has proven to be in other operations, is not necessarily to promote one policy or another but to sow dissent and conflict.

OpenAI identified five website fronts for Storm-2035, presenting as both progressive and conservative news outlets with convincing domain names like “evenpolitics.com.” The group used ChatGPT to draft several long-form articles, including one alleging that “X censors Trump’s tweets,” which Elon Musk’s platform certainly has not done (if anything, Musk is encouraging former president Donald Trump to engage more on X).

An example of a fake news outlet running ChatGPT-generated content. Image Credits: OpenAI

On social media, OpenAI identified a dozen X accounts and one Instagram account controlled by this operation. The company says ChatGPT was used to rewrite various political comments, which were then posted on these platforms. One of these tweets falsely, and confusingly, alleged that Kamala Harris attributes “increased immigration costs” to climate change, followed by “#DumpKamala.”

OpenAI says it did not see evidence that Storm-2035’s articles were shared widely and noted a majority of its social media posts received few to no likes, shares, or comments. This is often the case with these operations, which are quick and cheap to spin up using AI tools like ChatGPT. Expect to see many more notices like this as the election approaches and partisan bickering online intensifies.


AI founders play musical chairs

Welcome to Startups Weekly — your weekly recap of everything you can’t miss from the world of startups. Want it in your inbox every Friday? Sign up here.

This week we are looking at AI founders who are playing musical chairs, a massive defense tech investment, and more issues at Techstars. Let’s get into it.

Most interesting startup stories from the week


Founders and senior executives of the hottest AI startups have been leaping around this week.

OpenAI shuffle: John Schulman, one of the co-founders of OpenAI, left the company for rival AI startup Anthropic, following in the footsteps of Ilya Sutskever, the former OpenAI chief scientist who left the company in May and launched a new startup a month later. In the meantime, OpenAI president and co-founder Greg Brockman announced this week that he decided not to find a new “chair,” opting to take an extended leave to “relax and recharge” from his duties at the AI giant. Read more

Character development: Founders of Character.AI also did some seat switching this week. The a16z-backed chatbot startup’s CEO, Noam Shazeer, has returned to Google to join the tech giant’s DeepMind team. Character.AI co-founder Daniel De Freitas is also joining Google, along with some other employees. In a pseudo-acquisition resembling the deal Microsoft struck with Inflection in March, Google agreed to use Character.AI’s technology on a non-exclusive basis. Read more

Mega ammo: Defense tech startup Anduril has armed itself with $1.5 billion in fresh capital at a massive $14 billion valuation. The 7-year-old company, building autonomous military systems, has its sights set on becoming a top-tier defense contractor, rivaling the likes of Lockheed Martin and General Dynamics. The deal was co-led by returning backers Founders Fund and Sands Capital and was joined by new participants, including Fidelity Management & Research Company and Baillie Gifford. Read more

GrubMarket cracks Good Eggs: B2B produce and logistics company GrubMarket, which is now valued at $3.5 billion, has acquired 13-year-old Good Eggs, a fresh food delivery startup that was valued at $365 million in 2020. But the high-end grocery delivery startup that was backed by top investors, including Benchmark, Sequoia and Thrive, was marked down by 94% to $22 million after COVID-19 tailwinds faded. Read more

Most interesting fundraises this week


Chipping away: Groq, a startup that is manufacturing chips for running AI models, has raised $640 million at a $2.8 billion valuation from investors, including BlackRock, Neuberger Berman and Cisco. The 8-year-old company competes with Nvidia, which is estimated to control 70% to 95% of the AI chip manufacturing market, as well as Amazon, Google and Microsoft, all of which are working on developing their own AI chips. Read more  

Location, location, a bump in valuation: The 13-year-old location analytics startup Placer.ai shows that knowing where customers are is still very valuable for retailers, banks and healthcare companies. Placer quietly secured an additional $75 million in funding, valuing the company at $1.45 billion, a 45% increase from its previous valuation of $1 billion just 18 months ago. Read more

why?! not: Maya Watson and Lexi Nisita, who met while working at Netflix and later moved to Clubhouse, where they were employees 13 and 20, decided that the world needs another social networking startup to help people combat loneliness. The duo started a new app called why?! that’s part conversation app, part networking app and part dating app. why?! raised $1.65 million in a pre-seed round, led by Charles Hudson, managing partner and founder of Precursor Ventures. Read more

AI for little ones: Tired of your child begging you to watch another cartoon? Perhaps you can interest them in playing with an AI bot instead. Heeyo, a startup whose AI chatbot offers children ages 3 to 11 more than 2,000 interactive games and activities, has just raised a $3.5 million seed round from OpenAI Startup Fund, Alexa Fund, Pear VC and other investors. TechCrunch reporter Rebecca Bellan tried out Heeyo and put its kid-safety claims to the test. Read more

Most interesting VC and fund news this week


Falling stars: Techstars has been shining less brightly recently. The global startup accelerator laid off 17% of its workforce this week. Techstars will also wind down its $80 million Advancing Cities program by the end of the year. Launched in 2022 with J.P. Morgan’s backing, the program aimed to support diverse founders in cities like Oakland, New York, Miami, and Washington, D.C., through accelerator programs. Read more

Flint Capital sparks funding: Boston-based Flint Capital has raised a $160 million third fund. The 11-year-old firm’s limited partners are primarily tech entrepreneurs, including successful founders that Flint backed previously. Flint’s investments include identity verification startup Socure, last valued at $4.5 billion, and Flo Health, which was recently valued at $1 billion. Read more

Last but not least


Funding may still be hard to secure, especially for late-stage startups, but it doesn’t mean investors aren’t minting new unicorns. In fact, we counted 38 new ones that were created this year. Find out who grew a horn in 2024.


This Week in AI: OpenAI’s talent retention woes

Hiya, folks, welcome to TechCrunch’s regular AI newsletter.

This week in AI, OpenAI lost another co-founder.

John Schulman, who played a pivotal role in the development of ChatGPT, OpenAI’s AI-powered chatbot platform, has left the company for rival Anthropic. Schulman announced the news on X, saying that his decision stemmed from a desire to deepen his focus on AI alignment — the science of ensuring AI behaves as intended — and engage in more hands-on technical work.

But one can’t help but wonder if the timing of Schulman’s departure, which comes as OpenAI president Greg Brockman takes an extended leave through the end of the year, was opportunistic.

Earlier the same day Schulman announced his exit, OpenAI revealed that it plans to switch up the format of its DevDay event this year, opting for a series of on-the-road developer engagement sessions instead of a splashy one-day conference. A spokesperson told TechCrunch that OpenAI wouldn’t announce a new model during DevDay, suggesting that work on a successor to the company’s current flagship, GPT-4o, is progressing at a slow pace. (The delay of Nvidia’s Blackwell GPUs could slow the pace further.)

Could OpenAI be in trouble? Did Schulman see the writing on the wall? Well, the outlook at Sam Altman’s empire is undoubtedly gloomier than it was a year ago.

Ed Zitron, PR pro and all-around tech pundit, recently outlined in his newsletter the many obstacles that stand in the way of OpenAI’s continued success. It’s a well-researched and thorough piece, and I won’t do it an injustice by retreading the thing. But the points Zitron makes about OpenAI’s increasing pressure to perform are worth spotlighting.

OpenAI is reportedly on track to lose $5 billion this year. To cover the rising costs of headcount (AI researchers are very, very expensive), model training and model serving at scale, the company will have to raise an enormous tranche of cash within the next 12 to 24 months. Microsoft would be the obvious benefactor; it has a 49% stake in OpenAI and, despite their sometime rivalry, a close working relationship with OpenAI’s product teams. But with Microsoft’s capital expenditures growing 75% year-over-year (to $19 billion) in anticipation of AI returns that have yet to materialize, does it really have the appetite to pour untold billions more into a long-term, risky bet?

This reporter would be surprised if OpenAI, the most prominent AI company in the world, failed to source the money that it needs from somewhere in the end. There’s a very real possibility this lifeline will come with less favorable terms, however — and perhaps the long-rumored alteration of the company’s capped-profit structure.

Surviving will likely mean OpenAI moves further away from its original mission and into uncharted and uncertain territory. And perhaps that was too tough a pill for Schulman (and co.) to swallow. It’s hard to blame them; with investor and enterprise skepticism ramping up, the entire AI industry, not just OpenAI, faces a reckoning.

News

Apple Intelligence has its limits: Apple gave users the first real taste of its Apple Intelligence features with the release of the iOS 18.1 developer beta last month. But as Ivan writes, the Writing Tools feature stumbles when it comes to swearing and touchy topics, like drugs and murder.

Google’s Nest Learning Thermostat gets a makeover: After nine long years, Google is finally refreshing the device that gave Nest its name. The company on Tuesday announced the launch of the Nest Learning Thermostat 4 — 13 years after the release of the original and nearly a decade after the Learning Thermostat 3 and ahead of the Made by Google 2024 event next week.

X’s chatbot spread election misinfo: Grok has been spreading false information about Vice President Kamala Harris on X, the social network formerly known as Twitter. That’s according to an open letter penned by five secretaries of state and addressed to Tesla, SpaceX and X CEO Elon Musk, which claims that X’s AI-powered chatbot wrongly suggested Harris isn’t eligible to appear on some 2024 U.S. presidential ballots.

YouTuber sues OpenAI: A YouTube creator is seeking to bring a class action lawsuit against OpenAI, alleging that the company trained its generative AI models on millions of transcripts from YouTube videos without notifying or compensating the videos’ owners.

AI lobbying ramps up: AI lobbying at the U.S. federal level is intensifying in the midst of a continued generative AI boom and an election year that could influence future AI regulation. The number of groups lobbying the federal government on issues related to AI grew from 459 in 2023 to 556 from January to July of 2024.

Research paper of the week

“Open” models like Meta’s Llama family, which can be used more or less however developers choose, can spur innovation — but they also present risks. Sure, many have licenses that impose restrictions, as well as built-in safety filters and tooling. But beyond those, there’s not much to prevent a bad actor from using open models to spread misinformation, for example, or spin up a content farm.

There may be in the future.

A team of researchers hailing from Harvard, the nonprofit Center for AI Safety, and elsewhere proposes in a technical paper a “tamper-resistant” method of preserving a model’s “benign capabilities” while preventing the model from acting undesirably. In experiments, they found their method to be effective in preventing “attacks” on models (like tricking them into providing info they shouldn’t) at a slight cost to model accuracy.

There is a catch. The method doesn’t scale well to larger models due to “computational challenges” that require “optimization” to reduce the overhead, the researchers explain in the paper. So, while the early work is promising, don’t expect to see it deployed anytime soon.

Model of the week

A new image-generating model emerged on the scene recently, and it appears to give incumbents like Midjourney and OpenAI’s DALL-E 3 a run for their money.

Called Flux.1, the model — or rather, family of models — was developed by Black Forest Labs, a startup founded by ex-Stability AI researchers, many of whom were involved with the creation of Stable Diffusion and its many follow-ups. (Black Forest Labs announced its first funding round last week: a $31 million seed led by Andreessen Horowitz.)

The most sophisticated Flux.1 model, Flux.1 Pro, is gated behind an API. But Black Forest Labs released two smaller models, Flux.1 Dev and Flux.1 Schnell (German for “fast”), on the AI dev platform Hugging Face with light restrictions on commercial usage. Both are competitive with Midjourney and DALL-E 3 in terms of the quality of images they can generate and how well they’re able to follow prompts, claims Black Forest Labs. And they’re especially good at inserting text into images, a skill that’s eluded image-generating models historically.

Black Forest Labs has opted not to share what data it used to train the models (which is some cause for concern given the copyright risks inherent in this sort of AI image generation), and the startup hasn’t gone into great detail as to how it intends to prevent misuse of Flux.1. It’s taking a decidedly hands-off approach for now — so user beware.

Grab bag

Generative AI companies are increasingly embracing the fair use defense when it comes to training models on copyrighted data without the blessing of that data’s owners. Take Suno, the AI music-generating platform, for example, which recently argued in court that training on songs belonging to artists and labels without those artists’ and labels’ knowledge — and without compensating them — is protected fair use.

This is Nvidia’s (perhaps wishful) thinking, too, reportedly. According to a 404 Media report out this week, Nvidia is training a massive video-generating model, code-named Cosmos, on YouTube and Netflix content. High-level management greenlit the project, which they believe will survive courtroom battles thanks to the current interpretation of U.S. copyright law.

So, will fair use save the Sunos, Nvidias, OpenAIs and Midjourneys of the world from legal hellfire? TBD — and the lawsuits will take ages to play out, assuredly. It could well turn out that the generative AI bubble bursts before a precedent is established. If a precedent does arrive first, creators — from artists to musicians to writers to lyricists to videographers — can either expect a big payday or be forced to live with the uncomfortable fact that anything they make public is fair game for a generative AI company’s training.


Zuckerberg and Jensen show off their friendship, while an AI necklace covets yours

A fireside chat between Jensen Huang and Mark Zuckerberg at SIGGRAPH 2024 took some unexpected turns. What started as a conversation about the capabilities of Nvidia GPUs and Zuckerberg’s vision of an AI chatbot future quickly became a more casual affair — including a swap of custom-made jackets, a rare F-bomb from the Meta CEO, and a slightly unsettling anecdote about slicing tomatoes.

Bumble, Hinge and other apps were open to stalkers, with vulnerabilities that allowed users to be tracked to within 2 meters of their physical location. It took researchers a bit of work to identify the issue, which has since been resolved, but it’s another reminder that privacy is always one vulnerability away from being violated.
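The class of flaw at play here is classic trilateration: if a service leaks your precise distance to another user, three queries from chosen vantage points are enough to solve for that user’s position. A minimal planar sketch (hypothetical coordinates; real attacks work on the Earth’s surface and cope with rounded distances, so this is illustrative only):

```python
# Trilateration sketch: three exact distance readings from known points
# pin down a target's position. All coordinates here are hypothetical.

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Solve for (x, y) from three circle equations.

    Subtracting circle 1's equation from circles 2 and 3 cancels the
    quadratic terms, leaving a 2x2 linear system solved by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # zero if the three query points are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Three "distance oracle" queries around a target actually at (3, 4):
print(trilaterate((0, 0), 5.0, (6, 0), 5.0, (0, 8), 5.0))  # → (3.0, 4.0)
```

The usual mitigation is the one these apps reportedly adopted: round or snap reported distances to a coarse grid so the circles no longer intersect at a single point.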

Intel announced sweeping layoffs, affecting 15,000 employees, as the company continues to face declining revenue, a lack of success in its AI initiatives, and a prediction that the rest of the year will be “tougher than previously expected,” in the words of its CEO Pat Gelsinger.

The SEC charged BitClout founder Nader Al-Naji with fraud and unregistered offering of securities, claiming he used a pseudonymous identity to avoid regulatory scrutiny while he raised over $257 million in cryptocurrency. BitClout, a decentralized social media platform, raised money from a who’s who of firms, including a16z, Sequoia, Social Capital, Coinbase Ventures and Winklevoss Capital.

Meta reached a $1.4 billion settlement with Texas attorney general Ken Paxton this week. The settlement stems from a two-year-old lawsuit alleging that Meta’s past use of facial-recognition technology violated the state’s privacy protections and that Facebook failed to disclose this practice to users and obtain their consent. The first payment of $500 million is due in the next month, according to court filings.

This is TechCrunch’s Week in Review — where we recap the week’s biggest news. Want this delivered as a newsletter to your inbox every Saturday? Sign up here.

News


You can now try out Apple Intelligence: Apple is finally releasing some of its highly anticipated Apple Intelligence features as part of a developer beta version of iOS 18.1. Here’s how you can enable it on your iPhone. Read more

OpenAI starts rolling out Voice Mode: Following controversies and delays, OpenAI is giving a small group of ChatGPT Plus users access to GPT-4o’s advanced Voice Mode. The company says the feature will roll out to all Plus users in fall 2024. Read more

This necklace wants to be your friend: Friend is a wearable AI device designed to combat loneliness. Rather than focusing on productivity, the AI necklace acts like an always-listening walkie-talkie that you can chat with. Read more

Meta launches AI Studio: Creators in the U.S. will now be able to build AI bots across all Meta platforms. The bots can be used to create captions, format posts, generate memes, and even make a personal chatbot to interact with their followers. Read more

Here’s how to opt out of facial recognition at airports: U.S. airports are rolling out facial-recognition technology to scan the faces of travelers before they board their flight. But Americans can opt out of it altogether. Read more

Turns out, doomscrolling probably isn’t good for you: A recent study published in Computers in Human Behavior Reports links the process of doomscrolling to existential anxiety, despair, distrust and suspicion of others. Read more

Canva acquires Leonardo.ai: In an effort to broaden the scope of its AI tech stack, Canva has acquired generative AI content and research startup Leonardo.ai. As a result, all 120 of the startup’s employees will join Canva. Read more

Flo Health becomes a unicorn: The fertility-focused period-tracking app raised a $200 million Series C, valuing the startup at more than $1 billion post-money. The funding will be used to attract more users and add features for menopause and perimenopause. Read more

OpenAI and Microsoft’s frenemy era begins? Microsoft has invested significantly in OpenAI and uses its models across many products. And while OpenAI being listed as a “competitor” in an SEC filing may raise eyebrows, there’s some nuance involved. Read more 

Welcome back, Motorola Razr flip phones: Samsung is still the king of foldable smartphones, but there’s more competition in the category. We compare the new Galaxy Fold 6 to the Motorola Razr+ (which, yes, comes in that iconic pink shade). Read more

Analysis


Why did Wiz walk away from $23 billion? Google was reportedly offering $23 billion to acquire Wiz. Then Wiz walked away. Why? Ron Miller argues that by saying no to what could have been the most lucrative deal ever proposed for a startup, Wiz showed it has a lot of nerve — and that it’s willing to place a big bet on itself. Read more

Can you actually make an AI companion for children? The ambitious startup Heeyo wants to build an AI that is both a friend and a tutor for kids. But with a target audience like that, privacy and safety are of utmost concern, and Rebecca Bellan put both to the test in her exclusive exploration of the startup’s chatbot. Read more
