Freepik releases an ‘open’ AI image generator trained on licensed data

Freepik, the online graphic design platform, unveiled a new “open” AI image model on Tuesday that the company says was trained exclusively on commercially licensed, “safe-for-work” images.

The model, called F Lite, contains around 10 billion parameters — parameters being the internal values a model learns during training that largely determine its capabilities. F Lite was developed in partnership with AI startup Fal.ai and trained using 64 Nvidia H100 GPUs over the course of two months, according to Freepik.

F Lite joins a small but growing collection of generative AI models trained on licensed data.

We’ve been secretly working on this for months! It feels good to finally share it!

LINKS:

• Regular version: more predictable and prompt-faithful, but less artistic: https://t.co/MyWsKer9Ir

• Texture version: is more chaotic and error-prone, but delivers better textures and…

— Javi Lopez ⛩️ (@javilopen) April 29, 2025

Generative AI is at the center of copyright lawsuits against AI companies, including OpenAI and Midjourney. It’s frequently developed using massive amounts of content — including copyrighted content — from public sources around the web. Most companies developing these models argue fair use shields their practice of using copyrighted data for training without compensating the owners. Many creators and IP rights holders disagree.

Freepik has made two flavors of F Lite available, standard and texture, both of which were trained on an internal data set of around 80 million images. Standard is more predictable and “prompt-faithful,” while texture is more “error-prone” but delivers better textures and creative compositions, according to the company.

Here’s an image from the standard model generated with the prompt “A person standing in front of a sunset, in majestic surroundings.”

An AI-generated photo from Freepik’s F Lite model. Image Credits: Freepik

Freepik makes no claim that F Lite produces images superior to leading image generators like Midjourney’s V7, Black Forest Labs’ Flux family, or others. The goal was to make a model openly available so that developers could tailor and improve it, according to the company.

That being said, running F Lite is no easy feat. The model requires a GPU with at least 24GB of VRAM.
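For those who want to try it, the broad strokes of running an openly released text-to-image model look something like the sketch below, which uses Hugging Face’s diffusers library. The repository id, loading arguments, and prompt are illustrative assumptions rather than F Lite’s documented usage; the model card is the authoritative reference.

# Rough sketch: loading an openly released text-to-image model with the
# diffusers library. Repo id and arguments are assumptions, not F Lite's
# documented usage -- check the model card for the real instructions.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Freepik/F-Lite",           # hypothetical repository id
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,     # in case the model ships a custom pipeline
)
pipe.to("cuda")  # per Freepik, a GPU with at least 24GB of VRAM is required

image = pipe("A person standing in front of a sunset, in majestic surroundings").images[0]
image.save("sunset.png")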

Other companies developing media-generating models on licensed data include Adobe, Bria, Getty Images, Moonvalley, and Shutterstock. Depending on how AI copyright lawsuits shake out, the market could grow exponentially.


Meta says its Llama AI models have been downloaded 1.2B times

In mid-March, Meta said that its “open” AI model family, Llama, hit 1 billion downloads, up from 650 million downloads as of early December 2024. On Tuesday at its inaugural LlamaCon developer conference, Meta revealed that figure has reached 1.2 billion downloads.

“We have thousands of developers contributing tens of thousands of derivative models being downloaded hundreds of thousands of times a month,” said Meta Chief Product Officer Chris Cox onstage during a keynote.

Meanwhile, Meta AI, Meta’s AI assistant powered by Llama models, has reached around a billion users, Cox added.

Meta’s Llama ecosystem is indeed growing at a fast clip, but the tech giant faces competition from a number of formidable players in the AI space. Just on Monday, Alibaba released Qwen 3, a family of models that’s highly competitive on a number of AI benchmarks.


Meta previews an API for its Llama AI models

At its inaugural LlamaCon AI developer conference on Tuesday, Meta announced an API for its Llama series of AI models: the Llama API.

Available in limited preview, the Llama API lets developers explore and experiment with products powered by different Llama models, per Meta. Paired with Meta’s SDKs, it allows developers to build Llama-driven services, tools, and applications. Meta didn’t immediately share the API’s pricing with TechCrunch.
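Meta hasn’t published detailed documentation alongside the preview, but many hosted model APIs follow the OpenAI-style chat-completions convention, so a request could plausibly look like the hedged sketch below. The base URL and model identifier are placeholders, not Meta’s documented values.

# Hypothetical sketch of calling a Llama model through an OpenAI-compatible
# chat-completions endpoint. The base_url and model id are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.llama.example/v1",  # placeholder endpoint
    api_key="YOUR_LLAMA_API_KEY",
)

response = client.chat.completions.create(
    model="llama-4-maverick",  # placeholder model id
    messages=[{"role": "user", "content": "Summarize LlamaCon in one sentence."}],
)
print(response.choices[0].message.content)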

The rollout of the API comes as Meta looks to maintain a lead in the fiercely competitive open model space. While Llama models have racked up more than a billion downloads to date, according to Meta, rivals such as DeepSeek and Alibaba’s Qwen threaten to upend Meta’s efforts to establish a far-reaching ecosystem with Llama.

The Llama API offers tools to fine-tune and evaluate the performance of Llama models, starting with Llama 3.3 8B. Customers can generate data, train on it, and then use Meta’s evaluation suite in the Llama API to test the quality of their custom model.

Image Credits: Meta

Meta said it won’t use Llama API customer data to train the company’s own models and that models built using the Llama API can be transferred to another host.

For devs building on top of Meta’s recently released Llama 4 models specifically, the Llama API offers model-serving options via partnerships with Cerebras and Groq. These “early experimental” options are “available by request” to help developers prototype their AI apps, Meta said.

“By simply selecting the Cerebras or Groq model names in the API, developers can … enjoy a streamlined experience with all usage tracked in one location,” wrote Meta in a blog post provided to TechCrunch. “[W]e look forward to expanding partnerships with additional providers to bring even more options to build on top of Llama.”

Meta said it will expand access to the Llama API “in the coming weeks and months.”


Google’s NotebookLM expands its AI podcast feature to more languages

Google’s AI-based note-taking and research assistant NotebookLM is making its Audio Overviews feature available in 76 new languages, the company announced on Tuesday. Audio Overviews launched last year to give users the ability to generate a podcast with AI virtual hosts based on documents they have shared with NotebookLM, such as course readings or legal briefs.

The idea behind the feature is to give users another way to digest and comprehend the information in the documents they have uploaded to the app. With this expansion, more people can use Audio Overviews in their preferred language.

Google notes that up until now, Audio Overviews have been generated in your account’s preferred language. Now the company is introducing a new “Output Language” option that will allow users to choose which language their Audio Overviews are generated in.

You can change the language at any time, making it easy to create multilingual content or study materials as needed, Google says.

“For example, a teacher preparing a lesson on the Amazon rainforest can share resources in various languages — like a Portuguese documentary, a Spanish research paper, and English study reports — with their students,” Google wrote in a blog post. “The students can upload these and can generate an Audio Overview of key insights in their preferred language.”

Google told TechCrunch in an email that the new supported languages include Afrikaans, Arabic, Azerbaijani, Bulgarian, Bengali, Catalan, Czech, Danish, German, Greek, Spanish (European, Latin American, Mexico), Estonian, Basque, Persian, Finnish, Filipino, French (European), French (Canada), Galician, Gujarati, Hindi, Croatian, Haitian Creole, Hungarian, Armenian, Indonesian, Icelandic, Italian, Hebrew, and Japanese.

They also include Javanese, Georgian, Kannada, Korean, Konkani, Latin, Lithuanian, Latvian, Maithili, Macedonian, Malayalam, Marathi, Malay, Burmese (Myanmar), Nepali, Dutch, Norwegian (Nynorsk), Norwegian (Bokmål), Oriya, Punjabi, Polish, Pashto, Portuguese (Brazil, Portugal), Romanian, Russian, Sindhi, Sinhala, Slovak, Slovenian, Albanian, Serbian (Cyrillic), Swedish, Swahili, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Vietnamese, Chinese (Simplified), and Chinese (Traditional).


Google launches AI tools for practicing languages through personalized lessons

Google on Tuesday is releasing three new AI experiments aimed at helping people learn to speak a new language in a more personalized way. While the experiments are still in the early stages, it’s possible that the company is looking to take on Duolingo with the help of Gemini, Google’s multimodal large language model.

The first experiment helps you quickly learn specific phrases you need in the moment, while the second experiment helps you sound less formal and more like a local.

The third experiment allows you to use your camera to learn new words based on your surroundings.

Image Credits: Google

Google notes that one of the most frustrating parts of learning a new language is when you find yourself in a situation where you need a specific phrase that you haven’t learned yet.

With the new “Tiny Lesson” experiment, you can describe a situation, such as “finding a lost passport,” to receive vocabulary and grammar tips tailored to the context. You can also get suggestions for responses like “I don’t know where I lost it” or “I want to report it to the police.”
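Google hasn’t said how the experiments are built beyond noting they run on Gemini, but a Tiny Lesson-style helper can be roughly prototyped against the public Gemini API. The model name and prompt below are assumptions for illustration, not Google’s implementation.

# Rough prototype of a situation-based phrase helper using the public
# Gemini API (google-generativeai). Model name and prompt are assumptions.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

situation = "finding a lost passport"
prompt = (
    f"I'm learning Spanish. For the situation '{situation}', give me key "
    "vocabulary, one grammar tip, and two useful responses such as "
    "'I don't know where I lost it'."
)
print(model.generate_content(prompt).text)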

The next experiment, “Slang Hang,” wants to help people sound less like a textbook when speaking a new language. Google says that when you learn a new language, you often learn to speak formally, which is why it’s experimenting with a way to teach people to speak more colloquially, and with local slang.

Image Credits: Google

With this feature, you can generate a realistic conversation between native speakers and see how the dialogue unfolds one message at a time. For example, you can learn through a conversation where a street vendor is chatting with a customer, or a situation where two long-lost friends reunite on the subway. You can hover over terms you’re not familiar with to learn about what they mean and how they’re used.

Google says that the experiment occasionally misuses certain slang and sometimes makes up words, so users need to cross-reference them with reliable sources.

Image Credits: Google

The third experiment, “Word Cam,” lets you snap a photo of your surroundings, after which Gemini will detect objects and label them in the language you’re learning. The feature also gives you additional words that you can use to describe the objects.
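A Word Cam-style flow maps naturally onto Gemini’s multimodal input: pass a photo plus an instruction and ask for labels in the target language. Again, this is only a sketch under assumed names (file, model, prompt), not how Google’s experiment works internally.

# Sketch of a Word Cam-style request: a photo plus an instruction, with
# labels returned in the target language. File, model, and prompt are
# assumptions.
import PIL.Image
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

photo = PIL.Image.open("living_room.jpg")
response = model.generate_content([
    photo,
    "Label the objects you can see in French, and for each one add a "
    "related word I could use to describe it.",
])
print(response.text)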

Google says that sometimes you just need words for the things right in front of you, and that the feature can reveal how much vocabulary you don’t know yet. For instance, you may know the word for “window,” but you might not know the word for “blinds.”

The company notes that the idea behind these experiments is to see how AI can be used to make independent learning more dynamic and personalized.

The new experiments support the following languages: Arabic, Chinese (China, Hong Kong, Taiwan), English (Australia, U.K., U.S.), French (Canada, France), German, Greek, Hebrew, Hindi, Italian, Japanese, Korean, Portuguese (Brazil, Portugal), Russian, Spanish (Latin America, Spain), and Turkish. The tools can be accessed via Google Labs.


Meta launches a stand-alone AI app to compete with ChatGPT

After integrating Meta AI into WhatsApp, Instagram, Facebook, and Messenger, Meta is rolling out a stand-alone AI app. Unveiled at Meta’s LlamaCon event on Tuesday, this app allows users to access Meta AI in an app, similar to the ChatGPT app and other AI assistant apps.

To win over users, Meta is trying to leverage what makes it different from companies like OpenAI and Anthropic — Meta already has a sense of who you are, what you like, and who you hang out with based on years of data that you’ve likely shared on Facebook or Instagram.

Meta’s AI app can differentiate itself from existing AI assistants because it can “[draw] on information you’ve already chosen to share on Meta products,” the company said, such as your profile and the content you engage with. To start, these personalized responses will only be available in the U.S. and Canada.

You can also give Meta more information about you to remember for future conversations with its AI — for example, you can tell the AI that you are lactose intolerant, which it could remember before recommending that you go to a wine and cheese tasting on your next vacation.

As with any AI product, users should be aware of how Meta may use the data they share with its chatbots. Meta relies on its wealth of user data to power its targeted advertising business, which makes up the bulk of its revenue.

Image Credits: Meta

Meta’s AI app also introduces a Discover feed, where you can share how you’re using AI with your friends — in a mock-up image, Meta shows someone asking the AI to describe them in three emojis, which they then shared with their friends. A user’s interactions with Meta AI will only be shared to the feed if they choose to do so.

This feed might amplify certain generative AI trends, like the recent trend in which people tried to make themselves look like Barbie dolls or Studio Ghibli characters. But then again, not every app needs to have a social feed — we’re looking at you, Venmo.


Meta needs to win over AI developers at its first LlamaCon

On Tuesday, Meta is hosting its first-ever LlamaCon AI developer conference at its Menlo Park headquarters, where the company will try to pitch developers on building applications with its open Llama AI models. Just a year ago, that wasn’t a hard sell.

However, in recent months, Meta has struggled to keep up with both “open” AI labs like DeepSeek and closed commercial competitors such as OpenAI in the rapidly evolving AI race. LlamaCon comes at a critical moment for Meta in its quest to build a sprawling Llama ecosystem.

Winning developers over may be as simple as shipping better open models. But that may be tougher to achieve than it sounds.

A promising early start

Meta’s launch of Llama 4 earlier this month underwhelmed developers, with a number of benchmark scores coming in below models like DeepSeek’s R1 and V3. It was a far cry from what Llama once was: a boundary-pushing model lineup.

When Meta launched its Llama 3.1 405B model last summer, CEO Mark Zuckerberg touted it as a big win. In a blog post, Meta called Llama 3.1 405B the “most capable openly available foundation model,” with performance rivaling OpenAI’s best model at the time, GPT-4o.

It was an impressive model, to be sure — and so were the other models in Meta’s Llama 3 family. Jeremy Nixon, who has hosted hackathons at San Francisco’s AGI House for the last several years, called the Llama 3 launches “historic moments.”

Llama 3 arguably made Meta a darling among AI developers, delivering cutting-edge performance with the freedom to host the models wherever they chose. Today, Meta’s Llama 3.3 model is downloaded more often than Llama 4, said Hugging Face’s head of product and growth, Jeff Boudier, in an interview.

Contrast that with the reception to Meta’s Llama 4 family, and the difference is stark. But Llama 4 was controversial from the start.

Benchmarking shenanigans

Meta optimized a version of one of its Llama 4 models, Llama 4 Maverick, for “conversationality,” which helped it nab a top spot on the crowdsourced benchmark LM Arena. Meta never released this model, however — the version of Maverick that rolled out broadly ended up performing much worse on LM Arena.

The group behind LM Arena said that Meta should have been “clearer” about the discrepancy. Ion Stoica, an LM Arena co-founder and UC Berkeley professor who has also co-founded companies, including Anyscale and Databricks, told TechCrunch that the incident harmed the developer community’s trust in Meta.

“[Meta] should have been more explicit that the Maverick model that was on [LM Arena] was different from the model that was released,” Stoica told TechCrunch in an interview. “When this happens, it’s a little bit of a loss of trust with the community. Of course, they can recover that by releasing better models.”

No reasoning

A glaring omission from the Llama 4 family was an AI reasoning model. Reasoning models can work carefully through questions before answering them. In the last year, much of the AI industry has released reasoning models, which tend to perform better on specific benchmarks.

Meta’s teasing a Llama 4 reasoning model, but the company hasn’t indicated when to expect it.

Nathan Lambert, a researcher with Ai2, says the fact that Meta didn’t release a reasoning model with Llama 4 suggests the company may have rushed the launch.

“Everyone’s releasing a reasoning model, and it makes their models look so good,” Lambert said. “Why couldn’t [Meta] wait to do that? I don’t have the answer to that question. It seems like normal company weirdness.”

Lambert noted that rival open models are closer to the frontier than ever before and that they now come in more shapes and sizes — greatly increasing the pressure on Meta. For example, on Monday, Alibaba released a collection of models, Qwen3, which allegedly outperform some of OpenAI’s and Google’s best coding models on Codeforces, a programming benchmark.

To regain the open model lead, Meta simply needs to deliver superior models, according to Ravid Shwartz-Ziv, an AI researcher at NYU’s Center for Data Science. That may involve taking more risks, like employing new techniques, he told TechCrunch.

Whether Meta is in a position to take big risks right now is unclear. Current and former employees previously told Fortune that Meta’s AI research lab is “dying a slow death.” The company’s VP of AI Research, Joelle Pineau, announced this month that she was leaving.

LlamaCon is Meta’s chance to show what it’s been cooking to beat upcoming releases from AI labs like OpenAI, Google, xAI, and others. If it fails to deliver, the company could fall even further behind in the ultra-competitive space.


The deadline to book your exhibit table for TechCrunch Sessions: AI is May 9

We’re in the final stretch. TechCrunch Sessions: AI takes over Zellerbach Hall in just over a month, and the exhibit floor is almost completely booked.

If you’ve been considering showing off your AI product or innovation, now’s the time to commit. Exhibit tables are in short supply, and the May 9 deadline is fast approaching.

At TC Sessions: AI on June 5, AI leaders, engineers, founders, researchers, investors, and visionaries will gather to explore what’s next in artificial intelligence. They’re coming to scout what’s cutting-edge, what’s investable, and what’s actually moving the industry forward.

If that sounds like your company, you need to have your brand in the room — in front of the AI ecosystem — for the day. Book your table now before time, or the tables, run out.

What you gain from exhibiting

Showcase your solution to a highly targeted AI audience.

Strengthen your brand’s credibility and visibility.

Connect with potential partners, clients, and media.

Be seen as a serious player in the future of AI.

What’s included for exhibitors

Full-day exhibit space (6’ x 3’) in a high-traffic area

Branding across the event, website, and event app

Tickets for you and your team

Access to lead-generation tools

And more benefits to help you maximize your impact

Exhibit tables are available until they sell out or until May 9 at 11:59 p.m. PT — whichever comes first. And at the pace they’re moving, we wouldn’t wait.

Secure your exhibit table now and make a brand impact at TC Sessions: AI on June 5.

Image Credits: Halo Creative


Anthropic co-founder Jared Kaplan is coming to TechCrunch Sessions: AI

Hungry to learn more about Anthropic, directly from Anthropic? If so, you aren’t alone, which is why we’re so delighted to announce that Anthropic co-founder and Chief Science Officer Jared Kaplan is joining the main stage at TechCrunch Sessions: AI on June 5 at UC Berkeley’s Zellerbach Hall.

Lean into this session exploring the frontier of AI with Kaplan — and save big with Early Bird pricing. Save $210 on your ticket and get 50% off a second when you register by May 4 at 11:59 p.m. PT. Don’t wait — register now to secure your savings.

About Jared Kaplan’s session

Kaplan will take TC Sessions: AI attendees behind the scenes on hybrid reasoning models — which balance quick responses to simple queries with deeper processing for complex problems — and share insights into Anthropic‘s risk-governance framework for mitigating potential AI risks. (Kaplan was appointed Anthropic’s responsible scaling officer in October.)

Get the details on his session and check out all the AI trailblazers joining us — visit the TC Sessions: AI agenda page.

Get to know Kaplan

Jared Kaplan has a pretty remarkable résumé. Before co-founding Anthropic, he spent 15 years as a theoretical physicist at Johns Hopkins University, exploring quantum gravity, field theory, and cosmology. Since then, his research on scaling laws has been credited with revolutionizing how the AI industry understands and predicts the behavior of advanced systems. In fact, before Anthropic, Kaplan played a role in developing GPT-3 and Codex at OpenAI; meanwhile, at Anthropic, Kaplan helped develop Claude, the company’s family of AI assistants.

It’s been a wild ride for Kaplan and company. Anthropic’s remarkable growth has been fueled by several major developments in recent months alone, including its launch of Claude 3.7 Sonnet in late February, which the company described as its “most intelligent model yet” and the first hybrid reasoning model that can handle both simple and complex queries with appropriate processing time for each.
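In Anthropic’s public API, that hybrid behavior is exposed as an optional extended-thinking budget: simple prompts can be answered directly, while harder ones get room to reason first. The snippet below is a minimal sketch; the model id and token budgets are assumptions rather than recommendations.

# Minimal sketch of Claude's hybrid reasoning via the Anthropic Python SDK.
# An optional "thinking" budget lets the model work through hard problems
# before answering. Model id and budget values are assumptions.
import anthropic

client = anthropic.Anthropic(api_key="YOUR_API_KEY")

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # assumed model id
    max_tokens=16000,
    thinking={"type": "enabled", "budget_tokens": 8000},
    messages=[{"role": "user", "content": "How many primes are there below 200?"}],
)

# Print only the final answer, skipping the intermediate thinking blocks.
for block in response.content:
    if block.type == "text":
        print(block.text)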

The company more recently introduced an autonomous research capability and Google Workspace integration, transforming Claude into what Anthropic has characterized as a “true virtual collaborator” for enterprise users. (Anthropic is reportedly developing a voice assistant feature for Claude to compete with similar offerings from other AI companies.)

Unsurprisingly, investors have noticed. In March, Anthropic announced that it had completed a new fundraising deal that valued the company at $61.5 billion, up from about $16 billion roughly a year earlier.

Get the inside track on AI with big ticket savings

At TC Sessions: AI, Kaplan will share his vision for how AI will transform human-computer interaction, work processes, and social dynamics. But beyond the theoretical and technical aspects, Kaplan will offer tactical takeaways for teams of all sizes that are looking to implement AI and maximize its impact.

One of AI’s sharpest voices is taking the stage — and you won’t want to miss it. Save $210 when you register today, and score 50% off an extra ticket for your plus-one. Lock in your spot here at this must-see session!


Final 6 days: Save big and bring a plus-one for 50% off to TechCrunch Sessions: AI

The AI revolution isn’t coming — it’s already underway, and the time to grab your Early Bird ticket ends in just 6 days. Don’t miss your chance to dive into the AI ecosystem and lock in the lowest rates.

On June 5, TechCrunch Sessions: AI takes over UC Berkeley’s Zellerbach Hall for a one-day gathering of the world’s top founders, investors, researchers, technologists, and enthusiasts who are actively shaping the future of AI.

Early Bird pricing ends soon — save up to $210 on your pass, and get 50% off a second ticket for your co-founder, colleague, or AI-obsessed friend. Register here to lock in your savings.

What to expect at TechCrunch Sessions

See where AI is headed: Hear directly from Anthropic’s Jared Kaplan — and many more pioneers — on the frontier of AI innovation. Explore the full speaker lineup and session agenda here.

Learn what VCs are really looking for: Find out what today’s top investors want in AI startups — beyond the hype — across main stage talks and breakout sessions.

Explore the future of AI safety and policy: Dive into private deployment models, AI safety challenges, and global policy shifts shaping the industry.

Get hands-on in breakout sessions: Bring your questions to experts from OpenAI, Cohere, and others in interactive breakout discussions.

Power up your network: Set up high-impact 1:1 and small-group meetings to forge valuable connections and gain actionable insights.

Discover the next wave of AI solutions: Explore cutting-edge tools, products, and startups in the Expo Hall.

If you’re serious about building, backing, or shaping the next wave of AI innovation, this is the room you need to be in.

Lock in your ticket now + save 50% on the second — Early Bird pricing ends May 4 at 11:59 p.m. PT.

Image Credits: Max Morse for TechCrunch

Make your brand impossible to miss in the AI world

Time’s running out to reserve your exhibit booth at TC Sessions: AI. Secure your spot by May 9 — or before tables sell out. Showcase your brand to AI leaders, VCs, innovators, and visionaries. It’s your chance to highlight your tech and connect with the people shaping the future of AI. Secure your table here.

Image Credits: Halo Creative

