Context Hub vulnerable to supply chain attacks, says tester

The new AI tool highlights the risk when developers point their bots at non-authoritative information sources, with predictable consequences.

On the surface, the recent critique of a new tool called Context Hub by a developer who created an open-source alternative appears to be an illustration that the tool is vulnerable to misuse. But delve further and it serves as a far greater warning to AI developers of the downside of using non-authoritative sources of information.

Two weeks ago, Andrew Ng, founder of a Silicon Valley technical training firm called DeepLearning.AI, launched the product, which he stated in a LinkedIn post is an open tool that gives a coding agent the up-to-date API documentation it needs.

However, on Wednesday, Mickey Shmueli, the developer of LAP, which he described as an “open source alternative to Context Hub,” released a Context Hub supply chain attack Proof of Concept (PoC) on Github.

He explained the problem he’d discovered: Context Hub contributors submit docs as GitHub pull requests, maintainers merge them, and agents fetch the content on demand, “[but] the pipeline has zero content sanitization at every stage”

Responding to Shmueli’s findings, David Shipley, CEO of Beauceron Security, said Thursday, “[it is] time to have a moment of pure honesty about agentic AI. At its best, it’s a gullible, high-speed idiot that occasionally trips on hallucinogenic mushrooms that you’re giving the ability to act on your behalf. Stop and think about that. Would you knowingly hire a human that fit that description and then give them unsupervised access to code or your personal banking?  I wouldn’t.”

LLM-based generative AI tools, he said, “do not have the capacity for critical thought or reasoning, period. They’re probability math and tokens. They’re faking reasoning by retuning and iterating prompts to reduce the chances of being wrong.” 

That is not critical thinking, Shipley said, noting, “what was true in the 1950s remains true today: Garbage in, garbage out.”

People, he said, “built stochastic parrots that can be manipulated by sweet talking to them, and they call it prompt engineering. Dudes, it’s social engineering. And the more the AI industry keeps telling us about the Emperor’s New Clothes, the dumber we all look for believing them.”

Read the Full Story at InfoWorld

Next
Next

Nova Scotia politician blackmailed by hackers