Monthly Archives: March 2026

Cybersecurity Heavyweights Launch JetStream with $34M Seed Round to Bring Governance to Enterprise AI

Backed by Redpoint Ventures, CrowdStrike Falcon Fund, CrowdStrike CEO George Kurtz, Wiz CEO Assaf Rappaport, and Okta Vice-Chairman Frederic Kerrest, the company was founded by veteran security operators with a mission to accelerate enterprise AI success

JetStream Security, a new company founded by veteran security operators from CrowdStrike, Dazz, SentinelOne, Cohesity, McAfee, and Attivo Networks, today launched its AI governance platform. JetStream raised $34 million in a heavily oversubscribed seed round led by Redpoint Ventures, with participation from the CrowdStrike Falcon Fund. Cybersecurity luminaries George Kurtz (CrowdStrike), Assaf Rappaport (Wiz), and Frederic Kerrest (Okta) are among its angel investors.

Companies are racing to deploy AI agents, bots, applications, MCP servers, and custom-built models that they don’t fully understand, can’t clearly monitor, and struggle to control. Most technology leaders still can’t answer basic questions: what data is being accessed, how AI systems behave, who is accountable when there’s an AI incident, or what these systems really cost. When AI scales faster than governance, risk becomes the constraint on adoption.

Today, 93% of executives report challenges implementing AI governance and security guardrails, a sign that AI controls are ripe for innovation. At the same time, expectations are rising: more than 80% of CEOs are increasingly optimistic about the ROI of their AI investments, yet half believe their own jobs are at risk if those investments fail to deliver.

JetStream brings clarity to this chaos, giving enterprises unified visibility and control, so that AI becomes a strategic asset, not a hidden liability. The company’s thesis is that AI is ready for takeoff – but trust in AI remains nascent. Lack of trust is the main blocker to wider adoption and the reason so many organizations experience difficulty moving from pilot to production. Trust requires governance and security control capabilities that span the entire AI lifecycle. Trust is what enables leadership to give the green light for the production use of AI.

At the core of the platform is JetStream AI Blueprints™: dynamic, system-generated graphs of all the resources working toward a shared goal. They show how AI operates in an environment at any moment. Each Blueprint maps the relationships among agents, the models they use, the data they access, the tools they call, and the identities behind every action, whether human, agentic, or non‑human. Unlike static diagrams, Blueprints track real runtime behavior. They flag deviations from the authorized purpose, and can be updated to reflect legitimate changes through an authorization workflow. Blueprints also track the cost of each workflow, showing what every agent run costs and who is responsible for that spend. In short, a Blueprint is the operational contract for an AI workflow and a single source of truth for all AI assets: it makes behavior and cost visible, attributable, and governable.
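JetStream has not published an API, so the mechanics can only be sketched. Below is a minimal, purely hypothetical illustration of the Blueprint idea (every name, from `Blueprint` to `record_call`, is invented): authorized caller/callee relationships form the contract, runtime calls are checked against it, and cost is attributed per agent.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only -- JetStream has not published an API, so the
# class and method names below are invented for illustration.

@dataclass
class Blueprint:
    """An operational contract: authorized relationships plus runtime tracking."""
    authorized_edges: set = field(default_factory=set)   # (caller, callee) pairs
    observed_edges: set = field(default_factory=set)
    cost_by_agent: dict = field(default_factory=dict)    # agent -> cumulative USD

    def authorize(self, caller: str, callee: str) -> None:
        self.authorized_edges.add((caller, callee))

    def record_call(self, caller: str, callee: str, cost_usd: float = 0.0) -> bool:
        """Record a runtime call; return True if it deviates from the contract."""
        self.observed_edges.add((caller, callee))
        self.cost_by_agent[caller] = self.cost_by_agent.get(caller, 0.0) + cost_usd
        return (caller, callee) not in self.authorized_edges

bp = Blueprint()
bp.authorize("support-agent", "crm-database")
deviation = bp.record_call("support-agent", "crm-database", cost_usd=0.02)  # False
flagged = bp.record_call("support-agent", "payments-api", cost_usd=0.05)    # True
```

The point of the sketch is the shape of the data: because every runtime edge and every dollar is attached to an identity, deviations and spend are attributable rather than anonymous.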

“AI is moving faster than most organizations can manage,” said Raj Rajamani, CEO and co-founder of JetStream. “Leaders are being asked to bet their businesses and careers on systems they can’t fully see, explain, or control. That’s where trust breaks down. With AI Blueprints™, we give teams a clear, practical way to understand what their AI is doing, manage risk in real time, and move from experimentation to production with confidence. Our goal is simple: help companies scale AI responsibly, without slowing innovation.”

JetStream’s mission is to accelerate enterprise AI adoption by delivering a governance‑grade inspection and control layer that helps enterprises see, understand, and manage how agents operate across the enterprise. JetStream establishes agentic identity governance and controls for virtual workforces while maintaining agent‑level cost controls without slowing innovation.

“The pace of AI innovation is moving faster than most enterprises can safely absorb,” said Erica Brescia, Managing Director at Redpoint Ventures, a JetStream investor. “What stood out to us about JetStream is not just the product as an answer to major challenges, but also the team behind it. These are operators who’ve previously been ahead of every major security shift, and we trust them to stay ahead as agentic AI reshapes how organizations operate.” 

JetStream’s founders have led product, engineering, and go-to-market functions from seed to IPO and beyond at some of the most influential security companies of the last decade, including CrowdStrike, SentinelOne, Cohesity, Attivo Networks, Cylance, and McAfee. They have built platforms that protect the world’s largest enterprises, scaled organizations through hyper-growth, and navigated the security challenges that emerge each time a new technology paradigm takes hold.

JetStream’s seed round closed in a matter of weeks, reflecting strong demand from both investors and enterprises. The company is already working with Fortune 500 organizations and plans to expand rapidly across engineering, product, and go-to-market teams.

AI News Briefs BULLETIN BOARD for March 2026

Welcome to the AI News Briefs Bulletin Board, a timely new channel bringing you the latest industry insights and perspectives surrounding the field of AI including deep learning, large language models, generative AI, and transformers. I am working tirelessly to dig up the most timely and curious tidbits underlying the day’s most popular technologies. This field is advancing rapidly, and I want to bring you a regular resource to keep you informed and up to date. News bites are added continuously in reverse date order (most recent on top), so check back often to see what’s happening in our rapidly accelerating industry. Click HERE to check out previous “AI News Briefs” round-ups.

[3/3/2026] Andrew Ng Says AGI Is Decades Away—and the Real AI Bubble Risk Is in the Training Layer – Andrew Ng, founder of DeepLearning.AI and Coursera, executive chairman of Landing AI, and founding lead of the Google Brain team, says that AI capable of performing the full breadth of human intellectual tasks remains decades away. He recently appeared in an interview where he discussed enterprise adoption of agentic AI, whether AI is in a bubble, the AI infrastructure build-outs, geopolitical fragmentation and its effects on global AI strategy, and more. This post contains a transcript of that interview.

[3/3/2026] Why XML Tags Are so Fundamental to Claude – Structuring prompts with XML tags can dramatically improve results with Claude, and Anthropic’s prompting guidance explicitly recommends them. The tags act as delimiters: they let Claude reliably tell instructions apart from the data those instructions operate on, and keep one section of a prompt distinct from another. That kind of delimiter awareness is fundamental to parsing any structured communication, and it is a large part of what makes Claude so effective at interpreting layered prompts.
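To make the delimiter point concrete, here is a minimal, hypothetical prompt assembled with XML tags in the style Anthropic recommends (the tag names themselves are free-form conventions, not a fixed schema):

```python
# Hypothetical example: XML tags separate the instructions from the data they
# operate on, so the model never has to guess where one section ends.
document = "Q3 revenue grew 12% year over year, driven by cloud services."

prompt = f"""<instructions>
Summarize the document below in one sentence. Quote figures exactly.
</instructions>

<document>
{document}
</document>"""
```

Because the delimiters are explicit, text inside `<document>` that happens to look like an instruction can still be treated as data rather than a command.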

[3/3/2026] Alibaba’s small, open source Qwen3.5-9B beats OpenAI’s gpt-oss-120B and can run on standard laptops – Alibaba recently unveiled its Qwen3.5 Small Model Series. Qwen3.5-0.8B and 2B are intended for prototyping and deployment on edge devices where battery life is paramount. Qwen3.5-4B is a strong multimodal base for lightweight agents with a 262,144 token context window. Qwen3.5-9B is a compact reasoning model that outperforms OpenAI’s open source gpt-oss-120B on key third-party benchmarks. The weights for the models are available now under Apache 2.0 licenses on Hugging Face and ModelScope.

[3/3/2026] Anthropic vs. White House puts $60 billion at risk – The $60 billion invested in Anthropic by over 200 venture capital investors is now at risk due to a contract dispute with the Pentagon. Anthropic’s designation as a supply chain risk is unprecedented: it will prevent other military contractors from deploying Claude in their applications, and could require companies like NVIDIA, which do business with the US military, to sever their commercial ties with Anthropic. The situation is becoming an existential moment for American AI and its investors.

[3/2/2026] The #1 misconception I see in beginner data science: correlation = causation. I teach it this way: “Correlation helps you find where to look. Causation tells you what to do.” Big difference.
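The classic teaching example can be made numeric. The figures below are fabricated purely for illustration: ice-cream sales and drowning incidents both track temperature, so they correlate almost perfectly even though neither causes the other.

```python
# Toy, fabricated numbers: both series are driven by a common cause
# (temperature), so they correlate strongly with no causal link between them.
temp      = [14, 18, 22, 26, 30, 34]
ice_cream = [t * 5 + 3 for t in temp]      # sales rise with temperature
drownings = [t * 0.4 - 2 for t in temp]    # incidents also rise with temperature

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(ice_cream, drownings)   # near-perfect correlation
```

Correlation told you where to look (both series point back to temperature); only a causal model tells you what to do (banning ice cream prevents nothing).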

[3/2/2026] MIT just released their AI book library for free! – I love MIT Press books, and I use many of them regularly. Most people pay thousands for bootcamps that teach half of this. Bookmark it. Start anywhere. Just start.

1. Foundations of Machine Learning

2. Understanding Deep Learning

3. Algorithms for ML

4. Deep Learning

5. RL Basics (Sutton & Barto)

6. Distributional RL

7. Multi-Agent Systems

8. Long Game AI

9. Fairness in ML

10. Probabilistic ML (Part 1)

11. Probabilistic ML (Part 2)

[3/2/2026] Marc Andreessen: The real AI boom hasn’t even started yet – Featured on Lenny’s Podcast, Marc Andreessen is a founder, investor, and co-founder of Netscape, as well as co-founder of the venture capital firm Andreessen Horowitz (a16z). The conversation digs into why we’re living through one of the most incredible times in history, and what comes next.

[3/2/2026] Sakana AI releases open Doc-to-LoRA and Text-to-LoRA, generating LoRA adapters in a single forward pass – Sakana AI introduces Doc-to-LoRA and Text-to-LoRA, two systems that let you update large language models without running a new fine-tuning job.

Instead of retraining a model or stuffing long documents into the prompt, you train a separate model called a hypernetwork once. That hypernetwork generates small weight updates called LoRA adapters in a single forward pass. You attach the adapter to a frozen base model and get new knowledge or new skills instantly.

Key details:

  • Avoids expensive fine-tuning pipelines for each new task
  • Removes repeated long-document prompts and high memory use
  • Cuts per-update latency to sub-second adapter generation
  • Near-perfect zero-shot accuracy beyond 4× base context window
  • 75.03% accuracy on Imagenette via visual-to-text weight transfer
  • Matches performance of adapters trained on 9 benchmark tasks

How it works:

  • Train the hypernetwork once on representative tasks
  • Provide a document or task description at deployment
  • Generate a LoRA adapter and attach it to your base model
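The steps above can be sketched in miniature. This is not Sakana AI’s code (which is released separately); it is a toy, dependency-free illustration of the core idea: a single forward pass through a hypernetwork emits the entries of low-rank LoRA factors A and B, which are then attached to a frozen base weight.

```python
import random

# Toy sketch of the hypernetwork idea. The "hypernetwork" H is just a linear
# map from a task embedding to the flattened entries of LoRA factors
# A (r x d) and B (d x r); the base weight W stays frozen throughout.
random.seed(0)
d, r, e = 4, 2, 3          # model dim, LoRA rank, task-embedding dim

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]   # frozen base
H = [[random.uniform(-0.1, 0.1) for _ in range(e)] for _ in range(2 * d * r)]

def generate_lora(task_embedding):
    """One forward pass through the hypernetwork yields a LoRA adapter."""
    flat = [sum(h * t for h, t in zip(row, task_embedding)) for row in H]
    A = [flat[i * d:(i + 1) * d] for i in range(r)]                   # r x d
    B = [flat[d * r + i * r:d * r + (i + 1) * r] for i in range(d)]   # d x r
    return A, B

A, B = generate_lora([0.2, -0.5, 0.9])   # embedding of a document/task description
delta = matmul(B, A)                     # low-rank update, d x d
W_adapted = [[w + dw for w, dw in zip(wr, dr)] for wr, dr in zip(W, delta)]
```

The design point is that only H is ever trained; adapting to a new task is a forward pass (generating A and B) rather than a fine-tuning run over the base model.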

[3/2/2026] Anthropic introduces Claude Import Memory to transfer context from ChatGPT and Gemini – Instead of retraining a new assistant about your stack, tone, and ongoing projects, you transfer that information in one step. Claude stores it as persistent memory, which means it saves details across sessions and reuses them later. This solves a common problem: starting from zero every time you try a new model.

Key details:

  • Imports preferences, workflows, and project context in one copy-paste
  • Updates Claude’s long-term memory instantly
  • Reduces repeated setup across conversations
  • Maintains continuity across tools
  • Works on all paid Claude plans

How to import:

  • Copy Anthropic’s import prompt.
  • Paste it into your current assistant.
  • Generate the memory summary.
  • Open Claude → Settings → Memory.
  • Paste and save.

Claude then responds using your imported context in future chats.

doubleAI’s WarpSpeed Beats a Decade of Expert-Engineered GPU Kernels — Every Single One of Them

AI system achieves 3.6x average speedup over human experts across all tested algorithms and GPU architectures, marking the arrival of Artificial Expert Intelligence

doubleAI today announced WarpSpeed, the first Artificial Expert system to autonomously surpass world-class human experts in GPU performance engineering. WarpSpeed rewrote and re-optimized every kernel in NVIDIA’s cuGraph library — one of the most widely used GPU-accelerated graph analytics libraries in the world — delivering a 3.6x average speedup over a decade of expert-tuned code. The hyper-optimized library is now available on GitHub as a drop-in replacement requiring no code changes.

Key Results

  • 3.6x average speedup over human expert-written kernels
  • 100% of algorithms tested run faster with WarpSpeed
  • 55% of kernels achieve more than 2x improvement
  • Validated across three GPU architectures: NVIDIA A100, L4, and A10G

Why This Matters

cuGraph has been built and continuously refined by some of the world’s top GPU performance engineers over roughly a decade. It spans dozens of graph algorithms, each hand-optimized for maximum throughput. WarpSpeed beat every single one of them — on every tested GPU.

While AI has earned headlines for winning gold medals at the International Mathematical Olympiad and outperforming top programmers on competitive coding platforms like CodeForces, those achievements share three hidden advantages: abundant training data, easy-to-verify solutions, and short reasoning chains. GPU performance engineering breaks all three assumptions simultaneously:

  • Data scarcity: Only a few thousand truly optimized CUDA kernels exist publicly.
  • Validation complexity: Correctness is hard to verify — multiple valid solutions exist, and simple diffs are insufficient.
  • Deep reasoning chains: Optimal performance emerges from a long chain of interacting decisions — memory layout, warp behavior, caching strategy, scheduling, and graph structure.

Even state-of-the-art coding agents, including Claude Code, Codex, and Gemini CLI, fail dramatically in this domain — often producing incorrect implementations even when provided with cuGraph’s own test suite. In testing, leading coding agents produced buggy solutions in approximately 40% of tasks, making them unusable for real-world kernel replacement.

A New Paradigm: Artificial Expert Intelligence (AEI)

WarpSpeed represents the beginning of what doubleAI calls Artificial Expert Intelligence (AEI) — not Artificial General Intelligence (AGI), but something the world may need more urgently: AI systems that reliably surpass human experts in domains where expertise is rarest, slowest to develop, and most valuable.

“The real question isn’t ‘can AI code?’ — it’s ‘can AI become an expert?’” said Prof. Amnon Shashua, cofounder and CEO. “Humanity’s progress is bottlenecked by experts. If we can copy and paste expertise into the world, the impact is transformative.”

The Science Behind WarpSpeed

WarpSpeed’s results stem not from scaling alone, but from new algorithmic ideas developed by doubleAI’s research team:

  • Diligent Learning: A method for efficiently searching in the space of ideas, enabling AI to navigate vast design spaces and converge on high-quality solutions even when training data is scarce.
  • PAC Reasoning and Verification: A methodology for verifying correctness when ground truth is unavailable. Built on two components — Input Generation (IG), which finds challenging test inputs, and Automatic Validation (AV), which determines whether outputs are correct given a problem description. Grounded in the computational insight that verification is simpler than search, this approach allows AI to bootstrap reliable self-verification even in domains where it cannot yet solve the original problems.
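The IG/AV pattern can be illustrated on a toy problem. This is not doubleAI’s implementation (which is unpublished); sorting stands in for kernel generation, and all function names are invented. The key property is that the validator judges any candidate output from the problem description alone, with no reference solution.

```python
import random

# Toy illustration of the IG/AV verification loop described above.
random.seed(1)

def input_generation(n_cases=100):
    """IG: search for challenging test inputs (random cases plus edge cases)."""
    cases = [[random.randint(-50, 50) for _ in range(random.randint(0, 20))]
             for _ in range(n_cases)]
    cases += [[], [7], [3, 3, 3], list(range(20, 0, -1))]
    return cases

def automatic_validation(inp, out):
    """AV: check correctness from the problem description alone --
    'sorted' means the output is ordered AND is a permutation of the input."""
    return all(a <= b for a, b in zip(out, out[1:])) and sorted(inp) == sorted(out)

def candidate_kernel(xs):
    """The solution under test (could be AI-generated)."""
    return sorted(xs)

verified = all(automatic_validation(c, candidate_kernel(c))
               for c in input_generation())
```

Checking that a list is ordered and a permutation is far cheaper than producing the sort in the first place, which is the "verification is simpler than search" insight in miniature.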

These components create a flywheel: better verification engines produce better training data, which train stronger experts, which generate more sophisticated verification — and the cycle continues.

Availability

WarpSpeed-optimized cuGraph kernels are available today on GitHub at https://github.com/double-ai/doubleGraph. Users can install the optimized library with no changes to their existing code.

Looking Ahead

GPU hardware has long outpaced the software that runs on it. Every new architecture ships faster silicon, but the kernels and algorithms underneath lag behind — bottlenecked by the scarcity of engineers who can fully exploit it. WarpSpeed closes that gap: AI that keeps software in lockstep with hardware, unlocking the full potential of modern GPUs and opening the door to use cases that were previously out of reach.

cuGraph is a stress test. If AEI works in a domain where data is scarce, validation is hard, and the baselines are elite, then AEI can work wherever expertise is the bottleneck — from drug discovery and chip design to cybersecurity, robotics, and climate technology.