
Deel files countersuit against Rippling as rivalry escalates

In the latest development of an increasingly public dispute between HR and payroll services rivals, Deel has filed a countersuit against Rippling.

To recap: Rippling publicly announced on March 17 that it was suing Deel over alleged corporate espionage, with accusations ranging from violation of the RICO racketeering act (typically used to prosecute organized crime) to misappropriation of trade secrets and unfair competition. Deel is now slamming that lawsuit as part of a “campaign to try to impugn Deel’s reputation.”

That original lawsuit included an affidavit from the alleged spy that reads like a movie script. Deel had previously denied all wrongdoing.

Now the startup is taking things a step further. In a blog post Friday, Deel announced it has filed a civil suit against Rippling in the Superior Court in Delaware.

Deel’s complaint, dated April 24 and reviewed by TechCrunch, paints an unflattering picture of Rippling CEO Parker Conrad, describing the executive as “haunted by his previous failures, and now fueled by suffocating jealousy at his inability to fairly compete with Deel in the marketplace.”

In response, Conrad took to X to post that, “Nowhere does Deel dispute our central allegation – that @Bouazizalex personally recruited a spy to steal rippling’s trade secrets, and personally directed the theft.”

Specifically, Deel filed three motions addressing Rippling’s March lawsuit:

A motion to dismiss on Forum Non Conveniens grounds in favor of Ireland – arguing the case should be resolved where “Rippling previously initiated litigation” against Keith O’Brien, the alleged spy, and has now named Deel and several executives, including CEO and co-founder Alex Bouaziz.

A motion to dismiss under Rule 12(b)(6) – citing “Rippling’s failure to state a viable claim against Deel.”

An anti-SLAPP motion – “to stem Rippling’s attempt, through litigation, to infringe on Deel’s protected conduct.”

In its complaint, Deel makes counter-accusations, alleging that Rippling solicited Deel employees “to pass on to Rippling confidential commercially sensitive information about Deel.” The filing further accuses Rippling of placing its own “insider at Deel, essentially allowing it to eavesdrop on Deel’s internal communications without Deel’s permission.” 

As of April 14, Rippling was attempting to serve Alex Bouaziz with legal papers. However, French bailiffs hired by Rippling couldn’t seem to find Bouaziz. On April 15, TechCrunch reported that Deel’s CEO was in Dubai, further complicating Rippling’s efforts to serve him. A Deel spokesperson told TechCrunch on Friday: “Alex lives in Israel. He was in Dubai for a few days for Passover with his family, something he’s done for the past several years.”

Slate Auto eyes former Indiana printing plant for its EV truck production

Slate Auto, the buzzy new EV startup that broke stealth this week, is close to locking in a former printing plant in Warsaw, Indiana, as the future production site for its cheap electric truck, a review of public records shows.

The company is expected to lease the 1.4 million-square-foot facility for an undisclosed sum. Economic development officials told local media earlier this year (without naming Slate) that the factory could employ up to 2,000 people, and that the county offered the undisclosed company an incentive package.

It’s not immediately clear what that incentive package includes or whether it has been finalized. Slate did not immediately respond to a request for comment. Peggy Friday, the CEO of the Kosciusko County Economic Development Corporation, said in an email that she is “under a strict non-disclosure agreement with the project.”

Slate showed an aerial photo of the factory during Thursday’s event. The company did not say where it was located, but the photo matches a public listing for the facility available on the Indiana Economic Development Corporation’s website. TechCrunch previously reported that the company planned to make its EVs, which will cost under $20,000 after the federal tax credit, in Indiana.


“Our truck will be made here in the USA as part of our commitment to re-industrializing America,” Slate’s CEO Chris Barman said onstage while the factory photo was displayed on a screen behind her.

Slate’s focus on domestic manufacturing is embedded in the company’s DNA. The startup was originally created inside of Re:Build Manufacturing, a Massachusetts-based company focused on beefing up the country’s ability to make things.

The factory in Warsaw was built in 1958 and was occupied for decades by printing company R.R. Donnelley. It has been dormant for around two years, according to local media.

Converting a factory, especially one that was not previously pumping out cars, is no cheap or easy task. Slate has amassed a serious war chest to help tackle that goal. Backed in part by Amazon founder Jeff Bezos, Guggenheim Partners CEO Mark Walter, and powerhouse VC firm General Catalyst, the startup has raised well over $100 million to date.

The approach Slate is taking in designing and building its electric truck should help keep costs down, too. The company plans to sell wraps for the trucks instead of painting them, meaning it does not need to build a paint shop at the factory. That alone could save Slate hundreds of millions in the plant buildout process.

TechCrunch StrictlyVC in Athens in May will feature a special guest: Greece’s prime minister

We’re thrilled to announce that Greece’s prime minister, Kyriakos Mitsotakis, will be joining us at our upcoming StrictlyVC event in Athens, co-hosted with Endeavor, on Thursday night, May 8, at the stunning Stavros Niarchos Foundation Cultural Center.

For those who might not be familiar with his background, Mitsotakis brings a fascinating blend of experiences to the table. Before entering politics, he worked at both McKinsey and Chase Investment Bank, giving him firsthand experience in the business world that many operators throughout the startup ecosystem can appreciate. The youngest of four children, he also has some Silicon Valley-esque academic credentials — he headed to Harvard, then to Stanford for a master’s degree in international relations, and finally nabbed an MBA at Harvard Business School — and says his education has long shaped his vision for Greece’s future.

Mitsotakis has also been championing Greece’s tech transformation for many years. In fact, after navigating the country through the pandemic, he has doubled down on positioning Athens as an emerging tech hub, recently introducing initiatives to attract international talent, including tax incentives and reforms aimed at cutting bureaucratic red tape for new businesses.

The prime minister comes from a political family — his father was prime minister and his sister was mayor of Athens — but he has carved out his own reputation as a reformer focused on modernizing the Greek economy. His administration has been particularly interested in how tech can help diversify renowned traditional Greek strengths like shipping and tourism.

StrictlyVC events are kept small by design, giving investors, founders, and ecosystem builders a unique opportunity to engage directly with power players like the prime minister, so if you want to ask about his government’s vision for Greece’s tech future, and how the country fits into the broader European innovation landscape, this could be your chance.

You can check out the agenda and other speakers here, and buy tickets while they are still available. Registration is now open for what promises to be a fun evening filled with illuminating discussions, and this conversation with one of Europe’s most interesting political leaders about Greece’s emerging technology story is one you won’t want to miss. Register for your StrictlyVC Greece ticket here.

The TechCrunch Cyber Glossary

The cybersecurity world is full of jargon and lingo. At TechCrunch, we have been writing about cybersecurity for years, and we frequently use technical terms and expressions to describe the nature of what is happening in the world. That’s why we have created this glossary, which includes some of the most common — and not so common — words and expressions that we use in our articles, and explanations of how, and why, we use them. 

This is a developing compendium, and we will update it regularly. If you have any feedback or suggestions for this glossary, get in touch.

An advanced persistent threat (APT) is often categorized as a hacker, or group of hackers, that gains and maintains unauthorized access to a targeted system. The main aim of an APT intruder is to remain undetected for long periods of time, often to conduct espionage and surveillance, to steal data, or to sabotage critical systems.

APTs are traditionally well-resourced hackers, with the funding to pay for their malicious campaigns and access to hacking tools typically reserved for governments. As such, many of the long-running APT groups are associated with nation states, like China, Iran, North Korea, and Russia. In recent years, we’ve seen examples of financially motivated cybercriminal groups with no nation-state ties (engaged in theft and money laundering, for example) carrying out cyberattacks similar in persistence and capability to some traditional government-backed APT groups.

(See: Hacker)

An adversary-in-the-middle (AitM) attack, traditionally known as a “man-in-the-middle” (MitM), is where someone intercepts network traffic at a particular point on the network in an attempt to eavesdrop or modify the data as it travels the internet. This is why encrypting data makes it more difficult for malicious actors to read or understand a person’s network traffic, which could contain personal information or secrets, like passwords. Adversary-in-the-middle attacks can be used legitimately by security researchers to help understand what data goes in and out of an app or web service, a process that can help identify security bugs and data exposures.
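To show what that legitimate research use can look like in practice, here is a minimal sketch of an intercepting-proxy addon written for the open source mitmproxy tool; the tool choice and the log_requests.py filename are illustrative assumptions, not something this glossary prescribes.

```
# log_requests.py - a minimal mitmproxy addon that logs every request
# passing through the proxy. Run with: mitmdump -s log_requests.py
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    # Print the method and full URL of each intercepted request, which helps
    # a researcher see exactly what data an app sends and where it sends it.
    print(flow.request.method, flow.request.pretty_url)
```

To inspect an app’s traffic this way, the researcher points a test device at the proxy and installs mitmproxy’s certificate on it, which is why this technique only works on devices they control.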

The ability to run commands or malicious code on an affected system, often because of a security vulnerability in the system’s software. Arbitrary code execution can be achieved either remotely or with physical access to an affected system (such as someone’s device). In the cases where arbitrary code execution can be achieved over the internet, security researchers typically call this remote code execution. 

Often, code execution is used as a way to plant a back door for maintaining long-term and persistent access to that system, or for running malware that can be used to access deeper parts of the system or other devices on the same network. 

(See also: Remote code execution)

Attribution is the process of identifying who is behind a cyberattack. There is an oft-repeated mantra, “attribution is hard,” meant to warn cybersecurity professionals and the wider public that definitively establishing who was behind a cyberattack is no simple task. Attribution is not impossible, but the answer also depends on the level of confidence in the assessment.

Threat intelligence companies such as CrowdStrike, Kaspersky, and Mandiant, among others, have for years attributed cyberattacks and data breaches to groups or “clusters” of hackers, often referencing groups by a specific codename, based on a pattern of certain tactics, techniques and procedures as seen in previous attacks. Some threat intelligence firms go as far as publicly linking certain groups of hackers to specific governments or their intelligence agencies when the evidence points to it. 

Government agencies, however, have for years publicly accused other governments and countries of being behind cyberattacks, and have gone as far as identifying — and sometimes criminally charging — specific people working for those agencies.

A backdoor is a subjective term, but broadly refers to creating the means to gain future access to a system, device, or physical area. Backdoors can be found in software or hardware, such as a mechanism to gain access to a system (or space) in case of accidental lock-out, or for remotely providing technical support over the internet. Backdoors can have legitimate and helpful use cases, but backdoors can also be undocumented, maliciously planted, or otherwise unknown to the user or owner, which can weaken the security of the product and make it more susceptible to hacking or compromise.

TechCrunch has a deeper dive on encryption backdoors.

Hackers have historically been categorized as either “black hat” or “white hat,” usually depending on the motivations behind the hacking activity. A “black hat” hacker is typically someone who breaks the law and hacks for money or personal gain, such as a cybercriminal. “White hat” hackers generally hack within legal bounds, such as during a penetration test sanctioned by the target company, or to collect bug bounties by finding flaws in software and disclosing them to the affected vendor. Those who hack with less clear-cut motivations may be regarded as “gray hats.” Famously, the hacking group the L0pht used the term gray hat in an interview with The New York Times Magazine in 1999. While the “hat” terminology is still common in modern security parlance, many have moved away from it.

(Also see: Hacker, Hacktivist)

Botnets are networks of hijacked internet-connected devices, such as webcams and home routers, that have been compromised by malware (or sometimes weak or default passwords) for the purposes of being used in cyberattacks. Botnets can be made up of hundreds or thousands of devices and are typically controlled by a command-and-control server that sends out commands to ensnared devices. Botnets can be used for a range of malicious reasons, like using the distributed network of devices to mask and shield the internet traffic of cybercriminals, deliver malware, or harness their collective bandwidth to maliciously crash websites and online services with huge amounts of junk internet traffic. 

(See also: Command-and-control server; Distributed denial-of-service)

A brute-force attack is a common and rudimentary method of hacking into accounts or systems by automatically trying different combinations and permutations of letters and words to guess passwords. A less sophisticated brute-force attack is one that uses a “dictionary,” meaning a list of known and common passwords, for example. A well-designed system should prevent these types of attacks by limiting the number of login attempts allowed inside a specific timeframe, a defense called rate-limiting, as sketched below.
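As a concrete illustration of that defense, here is a minimal rate-limiting sketch in Python; the MAX_ATTEMPTS and WINDOW_SECONDS values and the function names are illustrative assumptions, not drawn from any particular product.

```
import time
from collections import defaultdict, deque

# Illustrative policy: allow at most 5 failed attempts per 5-minute window.
MAX_ATTEMPTS = 5
WINDOW_SECONDS = 300

# Timestamps of recent failed logins per username (in-memory sketch; a real
# system would persist this and typically rate-limit by IP address as well).
failed_attempts = defaultdict(deque)

def attempt_allowed(username: str) -> bool:
    """Return True if another login attempt is allowed for this username."""
    now = time.time()
    attempts = failed_attempts[username]
    # Discard failures that fall outside the sliding window.
    while attempts and now - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()
    return len(attempts) < MAX_ATTEMPTS

def record_failure(username: str) -> None:
    failed_attempts[username].append(time.time())
```

Even a simple sliding-window check like this makes a dictionary attack impractical, because the attacker can only test a handful of guesses before being locked out of further attempts for a while.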

A bug is essentially the cause of a software glitch, such as an error or a problem that causes the software to crash or behave in an unexpected way. In some cases, a bug can also be a security vulnerability. 

The term “bug” originated in 1947, at a time when early computers were the size of rooms and made up of heavy mechanical and moving equipment. The first known incident of a bug found in a computer was when a moth disrupted the electronics of one of these room-sized computers.

(See also: Vulnerability)

Command-and-control servers (also known as C2 servers) are used by cybercriminals to remotely manage and control their fleets of compromised devices and launch cyberattacks, such as delivering malware over the internet and launching distributed denial-of-service attacks.

(See also: Botnet; Distributed denial-of-service)

“Crypto” is a word that can have two meanings depending on the context. Traditionally, in the context of computer science and cybersecurity, crypto is short for “cryptography,” the mathematical field of coding and decoding messages and data using encryption.

Crypto has more recently also become short for cryptocurrency, such as Bitcoin, Ethereum, and the myriad blockchain-based decentralized digital currencies that have sprung up in the last fifteen years. As cryptocurrencies have grown from a niche community to a whole industry, crypto is now also used to refer to that whole industry and community. 

For years, the cryptography and cybersecurity communities have wrestled with the adoption of this new meaning, going as far as turning the phrases “crypto is not cryptocurrency” and “crypto means cryptography” into a dedicated website and even T-shirts.

Languages change over time depending on how people use words. As such, TechCrunch accepts the reality that crypto has different meanings depending on context, and where the context isn’t clear, we spell out cryptography or cryptocurrency.

Cryptojacking is when a device’s computational power is used, with or without the owner’s permission, to generate cryptocurrency. Developers sometimes bundle code in apps and on websites, which then uses the device’s processors to complete complex mathematical calculations needed to create new cryptocurrency. The generated cryptocurrency is then deposited in virtual wallets owned by the developer. 

Some malicious hackers use malware to deliberately compromise large numbers of unwitting computers to generate cryptocurrency on a large and distributed scale.

The world wide web is the public content that flows across the pipes of the internet; much of what is online today is there for anyone to access at any time. The “deep web,” however, is the content that is kept behind paywalls and member-only spaces, or any part of the web that is not readily accessible or browsable with a search engine. Then there is the “dark web,” the part of the internet that allows users to remain anonymous but requires certain software (such as the Tor Browser) to access, depending on which part of the dark web you’re trying to reach.

Anonymity benefits those who live and work in highly censored or surveilled countries, but it also can benefit criminals. There is nothing inherently criminal or nefarious about accessing the dark web; many popular websites also offer dark web versions so that users around the world can access their content. TechCrunch has a more detailed explainer on what the dark web is.

When we talk about data breaches, we ultimately mean the improper removal of data from where it should have been. But the circumstances matter and can alter the terminology we use to describe a particular incident. 

A data breach is when protected data is confirmed to have improperly left the system where it was originally stored, usually established when someone discovers the compromised data. More often than not, we’re referring to the exfiltration of data by a malicious cyberattacker, or to data otherwise detected as a result of an inadvertent exposure. Depending on what is known about the incident, we may describe it in more specific terms.

(See also: Data exposure; Data leak)

A data exposure (a type of data breach) is when protected data is stored on a system that has no access controls, for example because of human error or a misconfiguration. This might include cases where a system or database is connected to the internet but without a password. Just because data was exposed doesn’t mean it was actively discovered, but it could nevertheless still be considered a data breach.

A data leak (a type of data breach) is when protected data is stored on a system in a way that allowed it to escape, such as due to a previously unknown vulnerability in the system or by way of insider access (such as an employee). A data leak can mean that data could have been exfiltrated or otherwise collected, but there may not always be the technical means, such as logs, to know for sure.

Deepfakes are AI-generated videos, audio, or pictures designed to look real, often with the goal of fooling people into thinking they are genuine. Deepfakes are developed with a specific type of machine learning known as deep learning, hence the name. Examples of deepfakes range from the relatively harmless, like a video of a celebrity saying something funny or outrageous, to more harmful efforts. In recent years, there have been documented cases of deepfaked political content designed to discredit politicians and influence voters, while other malicious deepfakes have relied on recordings of executives designed to trick company employees into giving up sensitive information or sending money to scammers. Deepfakes are also contributing to the proliferation of nonconsensual sexual images.

Def Con is one of the most important hacking conferences in the world, held annually in Las Vegas, usually during August. Launched in 1993 as a party for some hacker friends, it has now become an annual gathering of almost 30,000 hackers and cybersecurity professionals, with dozens of talks, capture-the-flag hacking competitions, and themed “villages,” where attendees can learn how to hack internet-connected devices, voting systems, and even aircraft. Unlike other conferences like RSA or Black Hat, Def Con is decidedly not a business conference, and the focus is much more on hacker culture. There is a vendor area, but it usually includes nonprofits like the Electronic Frontier Foundation, The Calyx Institute, and the Tor Project, as well as relatively small cybersecurity companies.

A distributed denial-of-service, or DDoS, is a kind of cyberattack that involves flooding targets on the internet with junk web traffic in order to overload and crash the servers and cause the service, such as a website, online store, or gaming platform, to go down.

DDoS attacks are launched by botnets, which are made up of networks of hacked internet-connected devices (such as home routers and webcams) that can be remotely controlled by a malicious operator, usually from a command-and-control server. Botnets can be made up of hundreds or thousands of hijacked devices.

While a DDoS is a form of cyberattack, these data-flooding attacks are not “hacks” in themselves, as they don’t involve the breach and exfiltration of data from their targets, but instead cause a “denial of service” event to the affected service.

(See also: Botnet; Command-and-control server)

Encryption is the way and means by which information, such as files, documents, and private messages, is scrambled to make the data unreadable to anyone other than its intended owner or recipient. Encrypted data is typically scrambled using an encryption algorithm — essentially a set of mathematical formulas that determines how the data should be encrypted — along with a private key, such as a password, which can be used to unscramble (or “decrypt”) the protected data.

Nearly all modern encryption algorithms in use today are open source, allowing anyone (including security professionals and cryptographers) to review and check the algorithm to make sure it’s free of faults or flaws. Some encryption algorithms are stronger than others, meaning data protected by some weaker algorithms can be decrypted by harnessing large amounts of computational power.

Encryption is different from encoding, which simply converts data into a different and standardized format, usually for the benefit of allowing computers to read the data.

(See also: End-to-end encryption)
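To make the distinction between encryption and encoding concrete, here is a minimal Python sketch; it assumes the third-party cryptography package purely for illustration, since the glossary doesn’t prescribe any particular tool. Encoded data can be reversed by anyone, while encrypted data is unreadable without the key.

```
import base64
from cryptography.fernet import Fernet  # third-party: pip install cryptography

message = b"meet at the usual place"

# Encoding: just a standardized representation, reversible by anyone.
encoded = base64.b64encode(message)
print(base64.b64decode(encoded))        # b'meet at the usual place'

# Encryption: unreadable without the private key.
key = Fernet.generate_key()             # the key that protects the data
ciphertext = Fernet(key).encrypt(message)
print(Fernet(key).decrypt(ciphertext))  # works only because we hold the key
```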

End-to-end encryption (or E2EE) is a security feature built into many messaging and file-sharing apps, and is widely considered one of the strongest ways of securing digital communications as they traverse the internet.

E2EE scrambles the file or message on the sender’s device before it’s sent in a way that allows only the intended recipient to decrypt its contents, making it near-impossible for anyone — including a malicious hacker, or even the app maker — to snoop on someone’s private communications. In recent years, E2EE has become the default security standard for many messaging apps, including Apple’s iMessage, Facebook Messenger, Signal, and WhatsApp.

E2EE has also become the subject of governmental frustration in recent years, as encryption makes it impossible for tech companies or app providers to hand over information that they themselves do not have access to.

(See also: Encryption)

Most modern systems are protected with multiple layers of security, including the ability to set user accounts with more restricted access to the underlying system’s configurations and settings. This prevents these users — or anyone with improper access to one of these user accounts — from tampering with the core underlying system. However, an “escalation of privileges” event can involve exploiting a bug or tricking the system into granting the user more access rights than they should have. 

Malware can also exploit privilege-escalation bugs or flaws to gain deeper access to a device or a connected network, potentially allowing the malware to spread.

When we talk about espionage, we’re generally referring to threat groups or hacking campaigns that are dedicated to spying, and are typically characterized by their stealth. Espionage-related hacks are usually aimed at gaining and maintaining stealthy persistent access to a target’s network to carry out passive surveillance, reconnaissance for future cyberattacks, or the long-term collection and exfiltration of data. Espionage operations are often carried out by governments and intelligence agencies, though not exclusively.

An exploit is the way and means in which a vulnerability is abused or taken advantage of, usually in order to break into a system. 

(See also: Bug; Vulnerability)

In general terms, extortion is the act of obtaining something, usually money, through the use of force and intimidation. Cyber extortion is no different, as it typically refers to a category of cybercrime whereby attackers demand payment from victims by threatening to damage, disrupt, or expose their sensitive information. 

Extortion is often used in ransomware attacks, where hackers typically exfiltrate company data before demanding a ransom payment from the hacked victim. But extortion has quickly become its own category of cybercrime, with many financially motivated hackers, often younger ones, opting to carry out extortion-only attacks, which snub the use of encryption in favor of simple data theft.

(Also see: Ransomware) 

Forensic investigations involve analyzing data and information contained in a computer, server, or mobile device, looking for evidence of a hack, crime, or some sort of malfeasance. Sometimes, in order to access the data, corporate or law enforcement investigators rely on specialized devices and tools, like those made by Cellebrite and Grayshift, which are designed to unlock and break the security of computers and cellphones to access the data within.

There is no one single definition of “hacker.” The term has its own rich history, culture, and meaning within the security community. Some incorrectly conflate hackers, or hacking, with wrongdoing. 

By our definition and use, we broadly refer to a “hacker” as someone who is a “breaker of things,” usually by altering how something works to make it perform differently in order to meet their objectives. In practice, that can be something as simple as repairing a machine with non-official parts to make it function differently than intended, or even work better.

In the cybersecurity sense, a hacker is typically someone who breaks a system or breaks the security of a system. That could be anything from an internet-connected computer system to a simple door lock. But the person’s intentions and motivations (if known) matter in our reporting, and guide how we accurately describe the person, or their activity.

There are ethical and legal differences between a hacker who works as a security researcher, who is professionally tasked with breaking into a company’s systems with their permission to identify security weaknesses that can be fixed before a malicious individual has a chance to exploit them; and a malicious hacker who gains unauthorized access to a system and steals data without obtaining anyone’s permission.

Because the term “hacker” is inherently neutral, we generally apply descriptors in our reporting to provide context about who we’re talking about. If we know that an individual works for a government and is contracted to maliciously steal data from a rival government, we’re likely to describe them as a nation-state or government hacker (or, if appropriate, an advanced persistent threat), for example. If a gang is known to use malware to steal funds from individuals’ bank accounts, we may describe them as financially motivated hackers, or if there is evidence of criminality or illegality (such as an indictment), we may describe them simply as cybercriminals.

And, if we don’t know motivations or intentions, or a person describes themselves as such, we may simply refer to a subject neutrally as a “hacker,” where appropriate.

(Also see: Advanced persistent threat; Hacktivist; Unauthorized)

Sometimes, hacking and stealing data is only the first step. In some cases, hackers then leak the stolen data to journalists, or directly post the data online for anyone to see. The goal can be either to embarrass the hacking victim, or to expose alleged malfeasance. 

The origins of modern hack-and-leak operations date back to the early and mid-2000s, when groups like el8, pHC (“Phrack High Council”) and zf0 were targeting people in the cybersecurity industry who, according to these groups, had abandoned the hacker ethos and sold out. Later examples include hackers associated with Anonymous leaking data from U.S. government contractor HBGary, and North Korean hackers leaking emails stolen from Sony as retribution for the Hollywood comedy The Interview.

Some of the most recent and famous examples are the hack against the now-defunct government spyware pioneer Hacking Team in 2015, and the infamous Russian government-led hack-and-leak of Democratic National Committee emails ahead of the 2016 U.S. presidential elections. Iranian government hackers tried to emulate the 2016 playbook during the 2024 elections. 

A hacktivist is a particular kind of hacker who hacks for what they — and perhaps the public — perceive as a good cause, hence the portmanteau of the words “hacker” and “activist.” Hacktivism has been around for more than two decades, starting perhaps with groups like the Cult of the Dead Cow in the late 1990s. Since then, there have been several high-profile examples of hacktivists and hacktivist groups, such as Anonymous, LulzSec, and Phineas Fisher.

(Also see: Hacker)

“Infosec” is short for “information security,” an alternative term for defensive cybersecurity focused on the protection of data and information. Infosec may be the preferred term for industry veterans, while “cybersecurity” has become widely accepted; in modern usage, the two terms are largely interchangeable.

Infostealers are malware capable of stealing information from a person’s computer or device. Infostealers, such as Redline, are often bundled in pirated software; once installed, they will primarily seek out passwords and other credentials stored in the person’s browser or password manager, then surreptitiously upload the victim’s passwords to the attacker’s systems. This lets the attacker sign in using those stolen passwords. Some infostealers are also capable of stealing session tokens from a user’s browser, which allow the attacker to sign in to a person’s online account as if they were that user, but without needing their password or multi-factor authentication code.

(See also: Malware)

Jailbreaking is used in several contexts to mean the use of exploits and other hacking techniques to circumvent the security of a device, or removing the restrictions a manufacturer puts on hardware or software. In the context of iPhones, for example, a jailbreak is a technique to remove Apple’s restrictions on installing apps outside of its “walled garden” or to gain the ability to conduct security research on Apple devices, which is normally highly restricted. In the context of AI, jailbreaking means figuring out a way to get a chatbot to give out information that it’s not supposed to. 

The kernel, as its name suggests, is the core part of an operating system that connects and controls virtually all hardware and software. As such, the kernel has the highest level of privileges, meaning it has access to virtually any data on the device. That’s why, for example, apps such as antivirus and anti-cheat software run at the kernel level, as they require broad access to the device. Having kernel access allows these apps to monitor for malicious code.

Malware is a broad umbrella term that describes malicious software. Malware can land in many forms and be used to exploit systems in different ways. As such, malware that is used for specific purposes can often be referred to as its own subcategory. For example, the type of malware used for conducting surveillance on people’s devices is also called “spyware,” while malware that encrypts files and demands money from its victims is called “ransomware.”

(See also: Infostealers; Ransomware; Spyware)

Metadata is information about something digital, rather than its contents. That can include details about the size of a file or document, who created it, and when, or in the case of digital photos, where the image was taken and information about the device that took the photo. Metadata may not identify the contents of a file, but it can be useful in determining where a document came from or who authored it. Metadata can also refer to information about an exchange, such as who made a call or sent a text message, but not the contents of the call or the message.
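As a quick illustration of the kind of metadata described above, the sketch below reads a file’s basic attributes with Python’s standard library and, for photos, the EXIF tags embedded by the camera via the third-party Pillow package; the vacation.jpg path is a hypothetical example.

```
import os
from datetime import datetime

from PIL import ExifTags, Image  # third-party: pip install Pillow

path = "vacation.jpg"  # hypothetical example file

# Filesystem metadata: size and last-modified time, no file contents needed.
info = os.stat(path)
print(info.st_size, datetime.fromtimestamp(info.st_mtime))

# Photo (EXIF) metadata: tags such as the camera model and capture time
# that the device embedded when the picture was taken.
exif = Image.open(path).getexif()
for tag_id, value in exif.items():
    print(ExifTags.TAGS.get(tag_id, tag_id), value)
```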

Multi-factor authentication (MFA) is the common umbrella term for describing when a person must provide a second piece of information, aside from a username and password, to log into a system. MFA (or two-factor; also known as 2FA) can prevent malicious hackers from re-using a person’s stolen credentials by requiring a time-sensitive code sent to or generated from a registered device owned by the account holder, or the use of a physical token or key. 
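For a sense of how the time-sensitive codes mentioned above are generated, here is a minimal sketch using the third-party pyotp library, an assumption made purely for illustration; authenticator apps implement the same time-based one-time password (TOTP) standard.

```
import pyotp  # third-party: pip install pyotp

# The shared secret is provisioned once, usually by scanning a QR code
# when the user enrolls in MFA.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The authenticator app and the server each derive the same short-lived
# code from the shared secret and the current time.
code = totp.now()
print(code)               # e.g. a six-digit code valid for about 30 seconds
print(totp.verify(code))  # True while the code is still within its window
```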

Operational security, or OPSEC for short, is the practice of keeping information secret in various situations. Practicing OPSEC means thinking about what information you are trying to protect, from whom, and how you’re going to protect it. OPSEC is less about what tools you are using, and more about how you are using them and for what purpose. 

For example, government officials discussing plans to bomb foreign countries on Signal are practicing bad OPSEC because the app is not designed for that use-case, and runs on devices that are more vulnerable to hackers than highly restricted systems specifically designed for military communications. On the other hand, journalists using Signal to talk to sensitive sources is generally good OPSEC because it makes it harder for those communications to be intercepted by eavesdroppers.

(See also: Threat model)

Also known as “pen-testing,” this is the process where security researchers “stress-test” the security of a product, network, or system, usually by attempting to modify the way that the product typically operates. Software makers may ask for a pen-test on a product, or of their internal network, to ensure that they are free from serious or critical security vulnerabilities, though a pen-test does not guarantee that a product will be completely bug-free.

Phishing is a type of cyberattack where hackers trick their targets into clicking or tapping on a malicious link, or opening a malicious attachment. The term derives from “fishing,” because hackers often use “lures” to convincingly trick their targets in these types of attacks. A phishing lure could be an attachment coming from an email address that appears to be legitimate, or even an email spoofing the address of a person the target really knows. Sometimes, the lure could be something that might appear important to the target, like a forged document sent to a journalist that appears to show corruption, or a fake conference invite for human rights defenders. There is an often-cited adage by the well-known cybersecurity influencer The Grugq, which encapsulates the value of phishing: “Give a man an 0day and he’ll have access for a day, teach a man to phish and he’ll have access for life.”

(Also see: Social engineering)

Ransomware is a type of malicious software (or malware) that prevents device owners from accessing their data, typically by encrypting the person’s files. Ransomware is usually deployed by cybercriminal gangs who demand a ransom payment — usually in cryptocurrency — in return for the private key to decrypt the person’s data.

In some cases, ransomware gangs will steal the victim’s data before encrypting it, allowing the criminals to extort the victim further by threatening to publish the files online. Paying a ransomware gang is no guarantee that the victim will get their stolen data back, or that the gang will delete the stolen data.

One of the first-ever ransomware attacks was documented in 1989, in which malware was distributed via floppy disk (an early form of removable storage) to attendees of the World Health Organization’s AIDS conference. Since then, ransomware has evolved into a multibillion-dollar criminal industry as attackers refine their tactics and home in on big-name corporate victims.

(See also: Malware; Sanctions)

Remote code execution refers to the ability to run commands or malicious code (such as malware) on a system from over a network, often the internet, without requiring any human interaction from the target. Remote code execution attacks can range in complexity but can be highly damaging when vulnerabilities are exploited.

(See also: Arbitrary code execution)

Cybersecurity-related sanctions work similarly to traditional sanctions in that they make it illegal for businesses or individuals to transact with a sanctioned entity. In the case of cyber sanctions, these entities are suspected of carrying out malicious cyber-enabled activities, such as ransomware attacks or the laundering of ransom payments made to hackers.

The U.S. Treasury’s Office of Foreign Assets Control (OFAC) administers sanctions. The Treasury’s Cyber-Related Sanctions Program was established in 2015 as part of the Obama administration’s response to cyberattacks targeting U.S. government agencies and private sector U.S. entities.

While a relatively new addition to the U.S. government’s bureaucratic armory against ransomware groups, sanctions are increasingly used to hamper and deter malicious state actors from conducting cyberattacks. Sanctions are often used against hackers who are out of reach of U.S. indictments or arrest warrants, such as ransomware crews based in Russia.

A sandbox is a part of a system that is isolated from the rest. The goal is to create a protected environment where a hacker can compromise the sandbox, but without allowing further access to the rest of the system. For example, mobile applications usually run in their own sandboxes. If hackers compromise a browser, for example, they cannot immediately compromise the operating system or another app on the same device. 

Security researchers also use sandboxes in both physical and virtual environments (such as a virtual machine) to analyze malicious code without risking compromising their own computers or networks.

SIM swapping is a type of attack where hackers hijack and take control of a person’s phone number, often with the goal of then using the phone number to log into the target’s sensitive accounts, such as their email address, bank account, or cryptocurrency wallet. This attack exploits the way that online accounts sometimes rely on a phone number as a fallback in the event of losing a password.

SIM swaps often rely on hackers using social engineering techniques to trick phone carrier employees (or bribing them) into handing over control of a person’s account, as well as hacking into carrier systems.

Social engineering is the art of human deception, encompassing a range of techniques a hacker can use to deceive their target into doing something they normally would not do. Phishing, for example, can be classified as a type of social engineering attack because hackers trick targets into clicking on a malicious link or opening a malicious attachment; so can calling someone on the phone while pretending to be from their employer’s IT department.

Social engineering can also be used in the real world, for example, to convince building security employees to let in someone who shouldn’t be allowed to enter. Some call it “human hacking” because social engineering attacks don’t necessarily have to involve technology.

(Also see: Phishing)

Spyware is a broad term, like malware, that covers a range of surveillance monitoring software. Spyware is typically used to refer to malware made by private companies, such as NSO Group’s Pegasus, Intellexa’s Predator, and Hacking Team’s Remote Control System, among others, which the companies sell to government agencies. In more generic terms, these types of malware are like remote access tools, which allow their operators — usually government agents — to spy on and monitor their targets, giving them the ability to access a device’s camera and microphone or exfiltrate data. Spyware is also referred to as commercial spyware, government spyware, or mercenary spyware.

(See also: Stalkerware)

Stalkerware is a kind of surveillance malware (and a form of spyware) that is usually sold to ordinary consumers under the guise of child or employee monitoring software but is often used for the purposes of spying on the phones of unwitting individuals, oftentimes spouses and domestic partners. The spyware grants access to the target’s messages, location, and more. Stalkerware typically requires physical access to a target’s device, which gives the attacker the ability to install it directly on the target’s device, often because the attacker knows the target’s passcode. 

(See also: Spyware)

What are you trying to protect? Who are you worried about that could go after you or your data? How could these attackers get to the data? The answers to these kinds of questions are what will lead you to create a threat model. In other words, threat modeling is a process that an organization or an individual has to go through to design software that is secure, and devise techniques to secure it. A threat model can be focused and specific depending on the situation. A human rights activist in an authoritarian country has a different set of adversaries, and data, to protect than a large corporation in a democratic country that is worried about ransomware, for example. 

(See also: Operational security)

When we describe “unauthorized” access, we’re referring to the accessing of a computer system by breaking any of its security features, such as a login prompt or a password, which would be considered illegal under the U.S. Computer Fraud and Abuse Act, or the CFAA. The Supreme Court in 2021 clarified the CFAA, finding that accessing a system lacking any means of authorization — for example, a database with no password — is not illegal, as you cannot break a security feature that isn’t there. 

It’s worth noting that “unauthorized” is a broad term, often used subjectively by companies, and as such has been used to describe everything from malicious hackers stealing someone’s password to break in, through to incidents of insider access or abuse by employees.

A virtual private network, or VPN, is a networking technology that allows someone to “virtually” access a private network, such as their workplace or home, from anywhere else in the world. Many use a VPN provider to browse the web, thinking that this can help to avoid online surveillance.

TechCrunch has a skeptics’ guide to VPNs that can help you decide if a VPN makes sense for you. If it does, we’ll show you how to set up your own private and encrypted VPN server that only you control. And if it doesn’t, we explore some of the privacy tools and other measures you can take to meaningfully improve your privacy online.

A vulnerability (also referred to as a security flaw) is a type of bug that causes software to crash or behave in an unexpected way that affects the security of the system or its data. Sometimes, two or more vulnerabilities can be used in conjunction with each other — known as “vulnerability chaining” — to gain deeper access to a targeted system. 

(See also: Bug; Exploit)

Malicious attacks can sometimes be categorized and described by the amount of user interaction that malware, or a malicious hacker, needs in order to achieve successful compromise. One-click attacks refer to the target having to interact only once with the incoming lure, such as clicking on a malicious link or opening an attachment, to grant the intruder access. But zero-click attacks differ in that they can achieve compromise without the target having to click or tap anything. Zero-clicks are near-invisible to the target and are far more difficult to identify. As such, zero-click attacks are almost always delivered over the internet, and are often reserved for high-value targets for their stealthy capabilities, such as deploying spyware.

(Also see: Spyware)

A zero-day is a specific type of security vulnerability that has been publicly disclosed or exploited but the vendor who makes the affected hardware or software has not been given time (or “zero days”) to fix the problem. As such, there may be no immediate fix or mitigation to prevent an affected system from being compromised. This can be particularly problematic for internet-connected devices. 

(See also: Vulnerability)

First published on September 20, 2024.

Startups Weekly: Tech IPOs and deals proceed, but price matters

Welcome to Startups Weekly — your weekly recap of everything you can’t miss from the world of startups. Want it in your inbox every Friday? Sign up here.

This week confirmed that deals can still happen in a troubled world, but price considerations and adjustments are now part of the picture.

Most interesting startup stories from the week


Uncertain times are rarely good for M&As, which raises fears that tariff turmoil may have compromised the startup exit outlook for 2025. But don’t expect a total deal drought — as confirmed by this week’s news.

Price conscious: Anysphere, the company behind Cursor, is growing so fast that an acquisition by OpenAI is reportedly off the table. Whether OpenAI will acquire Windsurf instead remains to be confirmed, but the competition between the two AI coding assistant rivals is heating up.

Plane view: Datadog acquired AI-powered data observability startup Metaplane, a YC alum that has raised some $22.2 million to date. Deal terms were not disclosed. 

Hired: Erik Torenberg became a16z’s newest partner after the VC firm acqui-hired him and his podcast network, Turpentine, whose shows are set to continue.

Downsized: Ather Energy, an Indian EV startup seeking to go public, trimmed its IPO size and target valuation, citing market conditions. 

Most interesting VC and funding news this week


This week confirmed that vibe coding is as hot as can be, but startups in several other sectors also raised funding. Plus, there’s still money to be deployed into emerging markets.

Vibe coding: Supabase, an open source database startup that benefits from the hype around vibe coding tools, raised a $200 million Series D just seven months after its Series C, which valued the company at a reported $900 million. Its valuation has now officially climbed to $2 billion.

Adaptive Computer, a vibe coding startup that differentiates itself by focusing on non-programmers from day 1, also raised funding: a $7 million seed round led by Pebblebed.

Too many chats: Manychat, which provides an AI-enabled tool for businesses to manage and automate conversations across multiple messaging channels, raised a $140 million Series B led by Summit Partners.

Spotting flaws: Endor Labs, a startup that builds tools to scan AI-generated code for vulnerabilities, locked a $93 million Series B round led by DFJ Growth.

Sovereign AI: Formerly known as Xayn, Berlin-based legal AI startup Noxtua raised a $92.2 million Series B that follows its pivot into developing sovereign AI for law-related use cases such as drafting legal documents.

Shear money: Fintech API brokerage startup Alpaca picked up a $52 million Series C to further expand internationally.

Virtual CISO: Cynomi, a London- and Tel Aviv-based startup that provides SMBs with an AI-powered “virtual CISO,” raised a $37 million Series B co-led by Insight Partners and Entrée Capital.

Superpowers: After gathering a 150,000-person waitlist, health tech startup Superpower launched publicly and announced it raised a $30 million Series A backed by several celebrities. 

Debt financing: Froda, a Swedish fintech startup that developed a debt financing platform for SMBs, secured a $22.7 million Series B led by Swedish fund Incore Invest.

Cheat code: Chungin “Roy” Lee, a 21-year-old Columbia student who was suspended after developing a job interview cheating tool, raised $5.3 million in seed funding for his startup, Cluely, which offers an AI tool to “cheat on everything.” 

Copy-paste: Backed by more than 75 unicorn founders and VCs, Fluent Ventures is distributing $40 million to international founders, replicating proven business models in emerging markets.

Last but not least

Techstars accelerator raising a new $150 million fund. Image Credits: Techstars

In case you missed it, Techstars recently updated its standard deal: It will now invest $220,000 into startups entering its three-month program. That’s $100,000 more than previously, with new deal terms that mirror Y Combinator’s.

Google’s AI search numbers are growing, and that’s by design

Google started testing AI-summarized results in Google Search, called AI Overviews, two years ago, and it continues to expand the feature to new regions and languages. By the company’s estimation, it’s been a big success: AI Overviews is now used by more than 1.5 billion users monthly across over 100 countries.

AI Overviews compiles results from around the web to answer certain questions. When you search for something like “What is generative AI?” AI Overviews will show AI-generated text at the top of the Google Search results page. While the feature has dampened traffic to some publishers, Google sees it and other AI-powered search capabilities as potentially meaningful revenue drivers and ways to boost engagement on Search.

Last October, the company launched ads in AI Overviews. More recently, it started testing AI Mode, which lets users ask complex questions and follow-ups in the flow of Google Search. The latter is Google’s attempt to take on chat-based search interfaces like ChatGPT search and Perplexity.

During its Q1 2025 earnings call on Thursday, Google highlighted the growth of its other AI-based search products as well, including Circle to Search. Circle to Search, which lets you highlight something on your smartphone’s screen and ask questions about it, is now available on more than 250 million devices, Google said — up from around 200 million devices as of late last year. Circle to Search usage rose close to 40% quarter-over-quarter, according to the company.

Google also noted in its call that visual searches on its platforms are growing at a steady clip. According to CEO Sundar Pichai, searches through Google Lens, Google’s multimodal AI-powered search technology, have increased by 5 billion since October. The number of people shopping on Lens was up over 10% in Q1, meanwhile.

The growth comes amid intense regulatory scrutiny of Google’s search practices. The U.S. Department of Justice has been pressuring Google to spin off Chrome after the court found that the tech giant had an illegal online search monopoly. A federal judge has also ruled that Google has an adtech monopoly, opening the door to a potential breakup.

Roelof Botha, the head of Sequoia Capital, is coming to TechCrunch Disrupt 2025

We’re thrilled to announce that Roelof Botha, the managing partner of Sequoia Capital and one of the most influential figures in the venture capital world, will join us live onstage at TechCrunch Disrupt 2025 at Moscone West in San Francisco, which will take place from October 27 to 29.

As part of our ongoing mission to bring smart, future-focused conversations to the venture and startup community, having Botha — a leader who’s helped shape some of the most iconic tech companies of the last two decades — join us feels particularly critical this year, and for good reason.

First, the venture industry is fast evolving, with the most established venture firms beginning to operate like far broader investment firms, a once-overlooked secondary market taking center stage, and the field of venture participants actively shrinking as exits become harder to achieve.

The dynamics between VCs and founders are also shifting in this new AI-powered era. Startups are growing faster than ever — and in some cases, requiring unprecedented amounts of capital to compete and thrive. We want to know from Botha how Sequoia is thinking about those shifts, from deal flow to due diligence to the expectations around building and scaling companies, especially given the powerful concentration of several of the biggest players.

Lastly, this year is the 20th anniversary of TechCrunch, and Sequoia Capital has been at the center of the startup industry about which we are so passionate. What’s more, Roelof has been a central figure in our programming over the years; we couldn’t imagine a Disrupt that celebrates this milestone anniversary without him.

Don’t miss this session with Roelof Botha and $900 ticket savings

Stay tuned to the Disrupt 2025 speaker page for more speaker announcements, and if you care about where venture is heading, this is one conversation you won’t want to miss.

Save up to $900 on your Disrupt ticket with Early Bird pricing. Don’t miss Botha — one of the most influential leaders in VC — and your chance to lock in these savings. Snag your ticket here.

Last day to boost your brand and host a Side Event at TechCrunch Sessions: AI

This is your last chance to put your brand at the center of the AI conversation during TechCrunch Sessions: AI Week — with applications to host a Side Event closing tonight at 11:59 p.m. PT.

From June 1-7, TechCrunch is curating a dynamic weeklong series of Side Events leading up to and following the main event — TC Sessions: AI, taking place June 5 at UC Berkeley’s Zellerbach Hall. These are the gatherings where off-stage magic happens — and you still have a chance to lead one.

Whether it’s a roundtable, workshop, happy hour, or meetup, your Side Event can connect you with over 1,000 AI investors, builders, and thought leaders — from the event and the broader Berkeley tech scene — on your terms, in your voice, and on your turf.

Why host a Side Event?

Position your brand in front of the AI community.

Shape high-value conversations beyond the main stage.

Spark new partnerships, deals, and ideas in a more personal setting.

Get promotional support from TechCrunch and visibility on the website, event page, and event app.

Side Event perks include:

A custom discount code for your network.

Promotion on the TechCrunch Sessions: AI agenda, website, mobile app, and attendee emails.

Listing in TechCrunch Sessions: AI articles.

The opportunity to stand out during one of the most exciting weeks in AI.

The fine (but not too fine) print

There’s no fee to apply or host your Side Event. You’re in charge — from logistics and costs to promotion and everything in between. That said, we do have a few guidelines:

Events must run between June 1 and 7.

Anything on June 5 must start after 5:00 p.m. PT.

All attendees must be 18+ (or 21+ if serving drinks).

Events must be in or around Berkeley.

Apply before tonight’s deadline

Side Events are a standout way to connect with the AI community and boost your brand visibility. Apply for free and make your mark at TechCrunch Sessions: AI — the deadline is tonight at 11:59 p.m. PT.

Prince Harry meets, funds youth groups advocating for social media and AI safety

Prince Harry, Duke of Sussex, walked into the sunlit hotel conference room in Brooklyn on Thursday to meet with a dozen youth leaders working in tech safety, policy, and innovation.

The young adults chatted away at black circular tables, many unaware of his presence until he plopped down at a table and started talking with them. 

After making his way through various tables in the room, he took the stage to talk about the hopes and harms of this era of technological progress.

“Thank God you guys exist, thank God you guys are here,” he said. He spoke about tech platforms becoming more powerful than governments, and said that while these social media spaces were built around community, there has been “no responsibility to ensure the safety of those online communities.”

At one point, he said that some people in power are incentivized purely by profit rather than by safety and well-being. “You have the knowledge and the skillset and the confidence and the bravery and the courage to be able to stand up to these things,” he said to the crowd.

The event yesterday was hosted by the Responsible Tech Youth Power Fund (RTYPF), a grant initiative to support youth organizations working to shape the future of technology. The duke’s foundation, Archewell, which he co-founded with his wife, Meghan, Duchess of Sussex, funded the second cohort of RTYPF grantees, alongside names like Pinterest and Melinda French Gates’ Pivotal Ventures. 

TechCrunch received exclusive access to the event to chat with attendees, whose average age was around 22, about their work amid the rapidly changing technological landscape.

The young people at the event were cautiously optimistic about the future of artificial intelligence, but worried about the impact social media was having on their livelihoods. Everything is moving so fast these days, they said, faster than the law can keep up. 

“It’s not that the youth are anti-technology,” said Lydia Burns, 27, who leads youth and community partnerships at the nonprofit Seek Common Ground. “It’s just that we feel we should have more input and seats at the table to talk about how these things impact our lives.” 

Prince Harry meeting with participants at the RTYPF event. Image credits: Emil Cohen for Archewell

Each turn of every conversation at the event led back to social media. 

It’s consuming every part of a young person’s life, and the clouds could grow darker still, the young people said at the event.

Adam Billen, 23, helps run the organization Encode, which advocates for safe and responsible AI. He’s worked on the Take It Down Act, which seeks to tackle AI-generated porn, and on other pieces of legislation, like California’s SB 53, which would establish whistleblower protections for employees over AI-related issues. Billen, like the other young people at the event, is working fast to help the people in power understand new technology that is evolving even faster.

“As recently as two years ago, it was just not possible for someone without technical expertise to create realistic AI nudes of someone,” he told TechCrunch. “But today, with advances in generative AI, there are apps and websites publicly available for free that are being advertised to kids” on social media platforms.

He’s heard of cases where young people simply take photos of their classmates, fully clothed, and then upload them to AI image platforms to get realistic nudes of their peers. Doing that is not yet illegal nationwide, he said, and guardrails from Big Tech are loose. On these platforms, he said, it’s all too easy to see advertisements for tools to create deepfake porn, meaning it’s all too easy for children to find it too.

Sneha Dave, 26, the founder of Generation Patient, an organization that advocates for the support of young people with chronic conditions, is also worried about the sharp turn social media has taken. Influencers are doing paid advertisements for prescription medications, and teenagers are being fed pharmaceutical ads on social media, she said. 

“We don’t know how the FDA works with these companies to try to flag to make sure there’s not misinformation being spread by influencers advertising these prescription medications,” Dave told TechCrunch, speaking about Big Tech platforms. 

Social media in general has become a mental health crisis, the young people told us. Yoelle Gulko, 22, is working on a film to help people better understand the dangers of social media. She said that as she walks through college campuses these days, she hears of numerous people simply deleting their social media accounts, feeling helpless in their relationship to the online world.

“Young people shouldn’t be left to fend for themselves,” Gulko said. “Young people should really be given the tools to succeed online, and that’s something a lot of us are doing.” 

Adam Billen, vice president of public policy at Encode, speaking at the event. Image credits: Tanel Leigher, Responsible Tech Youth Power Fund

And they want a seat at the table to help bring change

Leo Wu, 21, remembers the exact moment that led him to start his nonprofit, AI Consensus.

It was back in 2023 when hype around ChatGPT was becoming widespread. “There was all this press from universities and media outlets about how it was destroying education,” Wu told TechCrunch. “And we just had this feeling that this was not at all the way, the attitude to take.” 

So he launched AI Consensus, which works with students, tech companies, and educational institutions to talk about the best ways students can use AI in school. 

“Is it a teenager’s fault for being addicted to Instagram?” Wu told us, capturing what many young people felt when asked. “Or is it the fault of a company that is making this technology addictive?” 

Wu wants to help students learn how to work with AI while still learning how to think for themselves.

Pushing for regulation was the main way the attendees we spoke to were looking to advocate for themselves. Some, however, were building their own organizations, putting the youth perspective at the forefront.

“I see youth as the bridge between our current government and what the responsible tech future is,” said Jennifer Wang, the founder of Paragon, which connects students with governments looking for perspectives on tech policy issues. 

Meanwhile, Generation Patient’s Dave is pushing for more collaboration between the FDA and FTC. She’s also working to help pass a bill through Congress to protect patients from deceptive drug ads online. 

Encode’s Billen said he’s considering supporting bills in various states that would require disclosure boxes so people know they are talking to an AI and not a human, as well as measures like the California bill that seeks to ban minors from using chatbots. He’s watching the Character.AI lawsuit closely, saying a verdict in that case would be a landmark in shaping future AI regulation.

His organization, Encode, along with others in the tech policy space, filed an amicus brief in support of the mother suing Character.AI over the alleged role it played in her son’s death.

At one point during the event, Prince Harry sat next to Wu to talk about the opportunities and dangers of AI. They spoke about the need for more accountability and about who had the power to push for change. The answer was clear.

“The people in this room,” Wu said. 

Chinese AI startup Manus reportedly gets funding from Benchmark at $500M valuation

Chinese startup Manus AI, which works on building tools related to AI agents, has picked up $75 million in a funding round led by Benchmark at a roughly $500 million valuation, according to Bloomberg.

The company will use the money to expand to new markets, including the U.S., Japan, and the Middle East, Bloomberg noted, citing people familiar with the matter.

Bloomberg’s report suggests that the fresh round has quintupled the valuation of Manus, which previously raised somewhere north of $10 million from backers including Tencent and HSG (formerly Sequoia China).

Manus came into the spotlight in March when the company launched a demo of a general AI agent that could complete various tasks. (In TechCrunch’s testing, it didn’t work quite as well as advertised.) The company later launched paid subscription plans costing between $39 per month and $199 per month.
