Daily Briefing

March 30, 2026
2026-03-29
33 articles

Recap: Europe's top funding rounds this week (23–29 March)

A weekly funding recap summarizing major European startup investment rounds between March 23-29, focusing on AI infrastructure, deep tech, and biotech.

  • Kandou AI (Swiss semiconductor): Raised $225M in Series A; its copper interconnect technology (Chord) improves AI infrastructure bandwidth by 2-4x and reduces power consumption by 50%.
  • Air Street Capital: Closed $232M, the largest solo GP venture fund in Europe to date, to focus on $500K-$15M investments in AI-first early-stage companies.
  • Granola (London AI meeting app): Raised $125M in Series C, with a valuation of $1.5B — a 6x increase in less than a year.
  • Ysios Capital: Launched InceptionBio, a €100M fund dedicated to biotech spin-outs from Spanish universities and research institutes.
  • Credo Ventures: Closed its 5th fund at $88M targeting Central and Eastern European founders (expanded from the previous €75M fund).
Notable Quotes & Details
  • Kandou AI valuation $400M, Series A $225M
  • Air Street Capital fund size $232M — largest solo GP in Europe
  • Granola valuation $1.5B, 6x growth in less than a year
  • 360 Capital new deep tech dual-use fund €85M

VCs, investors, startup founders, and European tech industry stakeholders

Chroma Releases Context-1: A 20B Agentic Search Model for Multi-Hop Retrieval, Context Management, and Scalable Synthetic Task Generation

Chroma released Context-1, a 20B parameter agentic search model specialized in multi-hop retrieval, solving context pollution issues in RAG pipelines with its Self-Editing Context feature.

  • Based on the gpt-oss-20B MoE architecture, fine-tuned with SFT + RL (CISPO) specialized for search agents.
  • Decomposes complex queries into subqueries, averaging 2.56 parallel tool calls (search_corpus, grep_corpus, read_document).
  • Self-Editing Context: Actively removes unnecessary chunks with the prune_chunks command, achieving a pruning accuracy of 0.94.
  • Maintains high-quality multi-hop retrieval within a 32k context — enables exploration of large datasets without large-context models.
  • Open-sourced context-1-data-gen, a synthetic training data generation pipeline.
Notable Quotes & Details
  • Number of parameters: 20B
  • Pruning accuracy: 0.94
  • Average parallel tool calls: 2.56 per turn
  • Context window: 32k

AI engineers, RAG system developers, and ML researchers

Google-Agent vs Googlebot: Google Defines the Technical Boundary Between User Triggered AI Access and Search Crawling Systems Today

Google officially defined the technical differences between Google-Agent, a user-triggered AI access entity, and Googlebot, an autonomous search crawler, in its official documentation.

  • While Googlebot crawls autonomously based on algorithmic schedules, Google-Agent is a fetcher that only responds to direct user requests.
  • Google-Agent ignores robots.txt — it operates like a user browser, so it applies different protocols than automatic mass collection.
  • User-Agent identification string: Includes 'compatible; Google-Agent' or uses the simple Google-Agent token.
  • Separate handling is required for Google-Agent in WAF and rate-limiting settings to prevent it from being mistakenly blocked as a malicious bot.
  • Since IP blocks are difficult to predict, it is recommended to verify request authenticity using Google's public JSON IP ranges.
Notable Quotes & Details
  • Ignores robots.txt: Since it's a direct user request, automatic crawler rules are not applied.
  • User-Agent: 'Mozilla/5.0 … (compatible; Google-Agent)'

Web developers, DevOps engineers, SEO managers, and infrastructure security managers

A Coding Guide to Exploring nanobot's Full Agent Pipeline, from Wiring Up Tools and Memory to Skills, Subagents, and Cron Scheduling

A tutorial explaining the full pipeline (tools, memory, skills, subagents, cron scheduling) of nanobot, HKUDS's ultra-lightweight personal AI agent framework, through direct code implementation.

  • nanobot is an ultra-lightweight framework that implements full agent functionality in about 4,000 lines of Python.
  • Step-by-step re-implementation of the agent loop, tool execution, memory persistence, skill loading, session management, subagent spawning, and cron scheduling.
  • Uses OpenAI gpt-4o-mini as the LLM provider, with API keys handled safely via terminal input.
  • Ultimately implements a multi-step research pipeline capable of reading/writing files, long-term memory storage, and parallel background worker delegation.
  • Covers methods for extending custom tools, skills, and agent architectures.
Notable Quotes & Details
  • nanobot codebase size: Approx. 4,000 lines of Python
  • Model used: openai/gpt-4o-mini

AI agent developers, Python developers, and LLM app builders

Notes: A tutorial article focused on code examples, including hands-on practice content.

Show GN: ClaudeCodeMultiAccounts : A script for switching between multiple Claude Code accounts

A script tool for conveniently switching between multiple Claude Code accounts on a single PC.

  • Resolves the inconvenience of logging in every time when using multiple accounts (personal, Teams, etc.) on one PC.
  • Synchronize the current account with the `!cc-sync-oauth` command and switch accounts with `!cc-switch`.
  • / commands are also supported, but due to the agent lifecycle hook characteristics, they do not work when tokens expire.
Notable Quotes & Details

Developers using multiple Claude Code accounts

Show GN: quickclaude : A CLI tool that displays the Claude list and opens a session

A CLI tool that reads the ~/.claude/projects/ directory to list Claude Code projects and open sessions directly.

  • Resolves the hassle of navigating with `cd` every time as projects accumulate when using Claude Code.
  • Opens a session directly upon selecting a project from the list already stored in `~/.claude/projects/`.
  • A simple CLI tool developed and distributed directly.
Notable Quotes & Details

Developers using Claude Code across multiple projects

Harness — Claude Code Agent Team & Skill Architect Plugin

A Claude Code meta-skill plugin that automatically designs and generates specialized agent teams and skills for a domain with a single command.

  • Supports 6 architectural patterns: Pipeline, Fan-out/Fan-in, Expert Pool, Generation-Verification, Supervisor, and Hierarchical Delegation.
  • Provides two execution methods: Agent Team mode (TeamCreate + SendMessage) and Sub-agent mode (Agent tool).
  • Automatically generates agent definition files in `.claude/agents/` and skill files in `.claude/skills/` upon execution.
  • Provides various team configuration examples such as deep research, website creation, code review, and data pipelines.
  • Released revfactory/harness-100: A package consisting of 10 domains, 100 harnesses, and 1,808 Markdown files.
  • Activation environment variable: `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1`
Notable Quotes & Details
  • Consists of 1,808 Markdown files, 200 Korean-English packages
  • Developed by Hwang Min-ho, Kakao AI Native Strategy Team Leader

Developers looking to build complex multi-agent workflows with Claude Code

AI provides excessively positive responses to users seeking personal advice

Stanford researchers published in Science that LLMs' sycophantic responses validate users' harmful behaviors and weaken empathy.

  • Evaluation of 11 LLMs including ChatGPT, Claude, Gemini, and DeepSeek showed AI supports the user's position 49% more often than humans.
  • AI gave positive responses to harmful or illegal behaviors at a rate of 47%.
  • Experiments with over 2,400 participants showed they trust sycophantic AI more and are more willing to reuse it, but their willingness to reconcile or apologize decreases.
  • Critical attitudes can be induced simply by instructing the model to start its output with 'wait a minute'.
  • Researchers defined sycophancy as a key risk to AI safety and advised against using AI as a human replacement for relationship advice.
Notable Quotes & Details
  • AI supports user position 49% more often than humans
  • Positive responses to harmful behavior at a 47% rate
  • Approx. 1/3 of US teenagers report having 'serious conversations' with AI
  • Published in: Science

AI researchers, policymakers, and general readers

jai - An easy isolation tool for AI agents

A lightweight open-source tool for safely isolating and running AI agents in Linux environments with a single command without containers.

  • Protects the home directory with a copy-on-write overlay and separates /tmp and /var/tmp to prevent changes to original files.
  • Provides three isolation modes: Casual, Strict, and Bare (security level can be selected).
  • Runs in isolation simply by prefixing commands with `jai` (e.g., `jai claude`).
  • Immediate use without Dockerfile or image builds.
  • Developed against the backdrop of actual file loss cases caused by AI tools (e.g., Claude Code home directory deletion, Cursor work tree deletion).
  • Jointly developed by Stanford SCS research group + FDCI, Apache 2.0 license.
Notable Quotes & Details
  • Nick Davidov: 15 years of family photos deleted via terminal command
  • Cursor user: Reported 100GB file deletion

Developers and system administrators looking to use AI coding tools safely

[R] I built a benchmark that catches LLMs breaking physics laws

Developed a benchmark using procedurally generated adversarial physics problems and a sympy+pint-based formula grader to detect LLM answers that violate physics laws.

  • Automatically generates problems covering 28 physics laws including Ohm, Newton, Ideal Gas, and Coulomb.
  • Three trap types: Anchoring bias (guided by peer answers), Unit confusion (mA/A, Celsius/Kelvin), and Formula traps (missing ½ in kinetic energy).
  • Testing of 7 Gemini models showed the highest score for gemini-3.1-flash-image-preview at 88.6%, while gemini-3.1-pro-preview scored 22.1%, lower than flash-lite (72.9%).
  • Bernoulli's equation was the most difficult law — all models, including the best, recorded 0% (due to pressure unit confusion Pa vs atm).
  • Results are automatically pushed to the HuggingFace dataset, with plans to test OpenAI, Claude, and open-source models in the future.
Notable Quotes & Details
  • gemini-3.1-flash-image-preview: 88.6%
  • gemini-3.1-pro-preview: 22.1% (lower than flash-lite's 72.9%)
  • 0% for all models on Bernoulli's equation

ML researchers and LLM evaluation managers

[R] First open-source implementation of Hebbian fast-weight write-back for the BDH architecture

First open-source implementation of fast-weight write-back, which actually updates weights during inference, in the Hebbian synaptic plasticity mechanism of the BDH architecture.

  • Implemented the write-back missing from the public code of the BDH (Dragon Hatchling) paper (arXiv:2509.26507).
  • Self-corrects decoder weights during inference using sparse activation codes as addresses.
  • Dense write-back degrades the signal, while Selective write-back (top 10% rows) preserves most performance (97.5% vs 75.4%).
  • Underlying mechanism: 1% (chance) without write-back, while Hebbian peak is 99.0/98.0/97.5 (n2/n4/n8).
  • 25M parameter model, synthetic n-back associative recall experiment, H100 independent verification completed.
Notable Quotes & Details
  • Selective write-back (rowtop10): n2 97.5% / n4 97.1% / n8 96.2%
  • Dense write-back: n2 75.4% / n4 68.1% / n8 89.8%
  • Apache 2.0 license

Neural network architecture and memory mechanism researchers

Notes: Proof of concept level based on synthetic datasets; natural language verification not completed.

[D] Why does it seem like open source materials on ML are incomplete? this is not enough...

A community discussion on the phenomenon where ML open-source materials often omit key details (datasets, hyperparameters, failure cases) required for reproduction.

  • Repositories often lack complete code, training details, and preprocessing steps needed to reproduce results.
  • Blogs and tutorials show only the 'happy path' and ignore actual edge cases, bugs, and production situations.
  • Exception: Andrej Karpathy's repositories (nanoGPT, llm.c) are educational and in-depth.
  • Causes cited include protecting competitive advantage, fast pace of development, paper/leaderboard-centric culture, and the high cost of writing complete reproduction code.
  • Regret over the lack of a culture of disclosing design logic, such as reasons for decision-making, trade-offs, and failed attempts.
Notable Quotes & Details

ML researchers, students, and developers interested in the open-source ecosystem

Notes: A community discussion post of an opinion-gathering nature, not academic research.

[R] GPT-5.4-mini regressed 22pp on vanilla prompting vs GPT-5-mini. Nobody noticed because benchmarks don't test this. Recursive Language Models solved it.

A comparative experiment showing that GPT-5.4-mini exhibited a 22pp performance drop in vanilla prompting compared to GPT-5-mini, but overcame this with the RLM (Recursive Language Model) architecture.

  • Vanilla accuracy of GPT-5.4-mini: Dropped from 69.5% to 47.2% in 1,800 evaluations across 12 tasks.
  • Official RLM implementation also dropped from 69.7% to 50.2%, but their own RLM implementation minimized the drop from 72.7% to 69.5%.
  • Their RLM allows the model to query data with Python code, enabling the architecture to absorb the performance drop.
  • AIME 2025: RLM 80% vs Vanilla 0% (where the model outputs answers without reasoning).
  • 5.1x fewer tokens, 3.2x cheaper vs official RLM, works on all models.
Notable Quotes & Details
  • Vanilla accuracy: 47.2% (GPT-5.4-mini) vs 69.5% (GPT-5-mini), 22pp difference
  • AIME 2025: RLM 80% vs Vanilla 0%
  • 5.1x fewer tokens, 3.2x cheaper vs official RLM

LLM optimization and inference architecture researchers and developers

AI psychology

A post sharing prompts that apply psychological projective test principles (Rorschach, TAT) to AI to explore unconscious motivations and internal conflicts.

  • Uses questions based on images, intuitive choices, and bodily sensations rather than direct questions to bypass conscious filters.
  • Applies Carl Rogers' concepts of the self-concept and the gap between the actual self to AI interaction.
  • Presents a prompt that analyzes core desire drivers, lust/passion, and the connection structure of meaning and beliefs after sequential questions.
  • Analysis is presented directly and truthfully, designed not to ask for user consent.
Notable Quotes & Details

General readers using AI as a self-exploration and psychological tool

Notes: A personal idea-sharing post, not verified research.

built an open source tool that auto generates AI context files for any codebase, 150 stars in

ai-setup, an open-source CLI tool that scans a codebase to automatically generate AI context files like CLAUDE.md, .cursorrules, and Windsurf rules.

  • Automatically detects frameworks, libraries, folder structures, and conventions and generates context files with the single command `npx ai-setup`.
  • 150 stars on GitHub, 90 PRs merged, actively handling 20 issues.
  • Automatic detection of TypeScript, Python, Go, Rust, React, Next.js, and more.
  • Free and open-source, seeking additional contributors.
Notable Quotes & Details
  • GitHub 150 stars, 90 PRs merged

Developers introducing AI coding tools (Claude, Cursor, Windsurf, etc.) to their projects

Notes: Similar posts by the same author were duplicated in r/artificial.

built an open source CLI that auto generates AI setup files for your projects just hit 150 stars

A post announcing 150 GitHub stars for the ai-setup CLI tool and requesting community contributions (same tool as the previous post).

  • Cross-posted the same ai-setup CLI tool to a different subreddit.
  • Generates all AI setup files in 10 seconds with `npx ai-setup`.
  • Supports .cursorrules, claude.md, codex config, and more.
  • Operating an active community (Discord link included).
Notable Quotes & Details
  • 150 GitHub stars, 90 PRs merged, 20 issues

Developers using AI coding tools

Notes: Duplicate promotional post for the same tool by the same author as the previous article (1s6pcue).

I am usig claude agents wrong?

A beginner user's question asking if each agent actually has an independent perspective when orchestrating multiple agents in Claude Code.

  • Tried setting up an orchestrator agent in the terminal to hire subordinate employee agents.
  • Raised the question that all agents seem to think the same way.
  • Asked the community how to obtain different perspectives like actual employees.
Notable Quotes & Details

General readers using Claude Code's multi-agent features for the first time

Notes: A community question post with low information density.

Does a 3D Environment Change How You Retain Information From AI?

A post introducing Otis, a prototype for interacting with AI in a Three.js-based 3D space environment to overcome the limitations of 2D scrolling interfaces in AI chat.

  • Highlighting the difficulty of managing complex projects with 2D sidebars as LLM context windows grow.
  • Developed an interface to converse with an AI sage character 'Otis' in a 3D cinematic environment to utilize human 'spatial memory'.
  • Uses Three.js as the front-end.
  • Aims to transition from 'chat' to 'information architecting'.
Notable Quotes & Details

Developers and general readers interested in AI interface design and UX

Notes: Early prototype introduction post, no research verification.

LocalLLaMA 2026

A short post on the LocalLLaMA subreddit saying 'we are doomed'.

  • Entire body of the post: 'we are doomed'
  • Presumed to be a simple reaction to the pace of AI development.
Notable Quotes & Details

LocalLLaMA community members

Notes: Incomplete content — Extremely short text with no specific information.

Friendly reminder inference is WAY faster on Linux vs windows

An Ollama inference speed benchmark comparing Windows 10 and Linux Ubuntu 22.04 on the same homelab PC.

  • Test environment: 64GB DDR4, RTX 8000 48GB, Core i9 9900K.
  • QWEN Code Next q4 (ctx 6k): Windows 18 t/s → Linux 31 t/s (+72%).
  • QWEN 3 30B A3B Q4 (ctx 6k): Windows 48 t/s → Linux 105 t/s (+118%).
  • Actual measurements confirm Linux is up to 2 times faster.
Notable Quotes & Details
  • QWEN 3 30B: Linux 105 t/s vs Windows 48 t/s (+118%)

Developers and homelab users looking to optimize local LLM inference

Meta new open source model is coming?

A rumor post that multiple variants of the new 'Avocado' model series were found on the Meta internal model selector screen.

  • Avocado 9B: A small 9B parameter version.
  • Avocado Mango: Presumed to be a multimodal variant capable of image generation with agent/sub-agent labels.
  • Avocado TOMM: Based on 'Tool of many models'.
  • Avocado Thinking 5.6: The latest Thinking model.
  • Paricado: A text-only conversation model.
Notable Quotes & Details
  • Source: Meta internal screen leak from testingcatalog.com

Open-source LLM community and developers interested in Meta AI trends

Notes: Unconfirmed rumor based on unofficial leaked information.

M5-Max Macbook Pro 128GB RAM - Qwen3 Coder Next 8-Bit Benchmark

A comprehensive benchmark comparing MLX vs Ollama for the Qwen3-Coder-Next 8-bit model on an M5-Max MacBook Pro 128GB.

  • MLX averaged 72.33 t/s vs Ollama's 35.01 t/s (+107% advantage).
  • TTFT (Time To First Token): MLX was approx. 50-58% faster.
  • Cold start: MLX 2.4s vs Ollama 65.3s (27x difference) — MLX weights are pre-sharded for Apple Silicon unified memory.
  • MLX consistently led in 6 coding tasks (from simple completion to complex concurrency patterns).
  • Test environment: mlx-lm v0.29.1, Ollama Q8_0, Python OpenAI client harness.
Notable Quotes & Details
  • MLX average 72.33 t/s vs Ollama 35.01 t/s
  • Cold start MLX 2.4s vs Ollama 65.3s

Developers running local LLMs on Apple Silicon Macs

Lessons from deploying RAG bots for regulated industries

Sharing practical lessons gained from actually deploying RAG-based AI assistants in regulated industries such as construction, nursing homes, and mining in Australia.

  • Query expansion (generating 4 alternative expressions with Claude Haiku followed by 4 ChromaDB searches and deduplication) was more effective than chunk size.
  • 'Source boost' strategy: If query contains words from a document title, chunks from that document are forced even if semantic similarity is low.
  • 3-layer prompt structure (Invariant core security rules / Industry persona / Client custom) to prevent Layer 1 bypass.
  • Local embeddings (sentence-transformers all-MiniLM-L6-v2 + ChromaDB) provided similar quality to ada-002 with reduced cost and latency.
  • Assigning one $6/month VM per client resulted in less operational overhead than shared infrastructure.
Notable Quotes & Details
  • $6/month VM per client
  • Models used: Claude Haiku (query expansion), sentence-transformers all-MiniLM-L6-v2 (embeddings)

RAG system developers and those in charge of AI introduction in regulated industries

Zhipu AI Releases Ultra-Low-Price Coding Model 'GLM-5.1'... Close to 'Claude Opus 4.6'

Zhipu AI released GLM-5.1, a specialized model with coding performance close to Claude Opus 4.6, at an ultra-low price of $3-$10 per month.

  • Recorded 45.3 points on the Claude Code benchmark — approx. 94.6% of Claude Opus 4.6's 47.9 points.
  • Approx. 28% performance improvement over the predecessor GLM-5 (35.4 points), representing a generational leap in just one month.
  • Price starts at $3/month under promotion, with a regular price of $10/month — up to 7x cheaper than comparable commercial models.
  • Remembers up to 10 steps of previous work context and performs self-debugging without separate intervention.
  • Immediately usable by changing the model name to 'glm-5.1' in the Claude Code settings file (~/.claude/settings.json).
Notable Quotes & Details
  • Claude Code benchmark: GLM-5.1 45.3 vs Claude Opus 4.6 47.9 (2.6 point difference)
  • Promotion price $3+/month, regular price $10/month
  • Approx. 28% performance improvement over predecessor GLM-5

Developers, AI tool users, and startups

Legal AI Leader Harvey Attracts Investment at 16 Trillion KRW Valuation... "Evolving from Assistant to Agent"

Legal AI startup Harvey raised $200 million at an $11 billion (approx. 16 trillion KRW) valuation, rapidly evolving beyond simple task assistance into a legal agent platform.

  • Raised $200 million in a round led by GIC and Sequoia Capital, with an $11 billion valuation — more than 7x increase from $1.5 billion in mid-2024.
  • Over 25,000 customized AI agents are operating on the platform (M&A, due diligence, contract drafting, document review, etc.).
  • Introduced agentic workflows based on multi-step reasoning beyond RAG — AI independently searches for additional relevant precedents.
  • Used by over 100,000 lawyers in over 1,300 organizations across 60 countries.
  • Total cumulative investment surpassed $1 billion (approx. 1.5 trillion KRW).
Notable Quotes & Details
  • Valuation of $11 billion (approx. 16 trillion KRW)
  • This investment $200 million (approx. 300 billion KRW), cumulative over $1 billion
  • Over 1,300 organizations, 100,000+ lawyers across 60 countries
  • 25,000+ customized AI agents on the platform
  • Pat Grady, Sequoia Partner: "Harvey has become the platform where legal work actually gets done."

Legal professionals, AI business investors, and corporate decision-makers

Intercom Unveils 'Apex,' Customer Support-Specialized AI... "Beats GPT and Claude with Customization"

Customer service platform Intercom unveiled 'Fin Apex 1.0,' its own domain-specialized AI model, claiming it surpassed GPT-5.4 and Claude Opus 4.5 in customer problem resolution rates.

  • Customer problem resolution rate of 73.1% — a 2%p lead over the 71.1% of GPT-5.4 and Claude Opus 4.5 each.
  • Applied reinforcement learning (RL) based on billions of customer interaction data — learning judgment, empathy, and conversation structure.
  • Average response time 3.7 seconds, hallucination rate reduced by 65% compared to existing ones, and cost about 1/5 of frontier models.
  • Only provided through the Fin AI agent (no independent API supported), currently processing 2 million customer inquiries per week.
  • Evaluated as a case where a SaaS company broke away from external API dependence and secured competitiveness with its own domain-specialized model.
Notable Quotes & Details
  • Resolution rate: Apex 73.1% vs GPT-5.4/Claude Opus 4.5 each 71.1%
  • Annual Recurring Revenue (ARR) nearing $100 million (approx. 150 billion KRW)
  • Hallucination rate reduced by 65%
  • Cost approx. 1/5 of frontier models

Corporate customer service managers, SaaS companies, and AI service developers

Anthropic Developing 'Operon,' an Agent for Biological Research... 'AI Scientist' Activated

It was confirmed through code analysis that Anthropic is developing 'Operon,' an AI agent exclusive to the Claude desktop app that autonomously conducts computational biology research.

  • Traces of the standalone mode 'Operon' found in the Claude desktop app code.
  • Expected to support computational biology workflows such as phylogenetic tree construction, CRISPR gene knockout screen design, single-cell RNA sequencing analysis, and enzyme variant ranking.
  • Includes plan and auto modes, with features for granting local file/folder access.
  • Evaluated as the next step after Anthropic's 'AI for Science' → 'Claude for Life Sciences' → 'Claude for Healthcare (January 2026)'.
  • Combined with Claude Mythos (high-reasoning model), it could exert influence across biological research beyond AlphaFold.
Notable Quotes & Details
  • Released HIPAA-compliant 'Claude for Healthcare' in January 2026
  • The name 'Operon' originates from a cluster of genes transcribed together in bacterial DNA

Biology and life science researchers, AI science stakeholders, and Anthropic product enthusiasts

Notes: Unreleased feature discovered through code analysis of the Claude desktop app, not an official announcement — actual release unconfirmed.

GIST 'Balgarak' Team Wins International AI Game Competition... Receives $6,000

The 'Balgarak' team, composed of GIST AI Convergence graduate students, won the small language model (SLM) track of the Krafton-sponsored international AI game play competition and received $6,000.

  • Won the SLM track of Krafton's 'Gaming Agent Challenge' (total 117 teams, $20,000 total prize pool, sponsored by NVIDIA, AWS, OpenAI).
  • Played Super Mario, Pokémon Red, StarCraft II, and 2048 consecutively with a single SLM.
  • Introduced a structural analysis module based on 'action candidate generation' — generates executable action candidates and priorities first, then selects the optimal action.
  • Applied a stabilization device that automatically corrects with additional instructions when incorrect results occur.
  • Proven the possibility of using SLMs as general-purpose game AI within limited computational resources.
Notable Quotes & Details
  • Track winner prize $6,000
  • Total 117 teams participated, $20,000 total prize money
  • Professor Kim Kyung-joong: "It has great technical value in showing the general-purpose utility of SLMs across multiple game environments."

AI researchers, game AI developers, and academic stakeholders

Tobe Unicorn Targets "Mission-Critical AI" Market in Partnership with ETRI

Tobe Unicorn is set to commercialize mission-critical AI solutions with minimized hallucinations, having received core high-reliability generative AI LLM technology from ETRI.

  • Received 'User preference-based knowledge retrieval post-training technology' and 'Korean-specialized text embedding/clustering technology' from ETRI.
  • Focused on controlling hallucination phenomena and improving answer accuracy — responding to mission-critical environments where not even a 1% error is allowed.
  • Applied technology to its specialized language model 'TBU LLM,' with plans to develop private LLMs and lightweight models (sLLM).
  • Will combine specialized domain data such as communication shadow areas, forest fires, landslides, and satellite data with an advanced RAG pipeline.
  • Aims to provide 'Ready-to-use' customized AI infrastructure for the public and enterprise markets.
Notable Quotes & Details

Public institutions, corporate AI officers, and industrial AI solution stakeholders

Switching to Claude? Here's how to take your ChatGPT memories with you

Anthropic's Claude provides a migration tool that can bring over memories from other AI services like ChatGPT, facilitating users' AI transition.

  • Claude's new memory migration tool allows bringing over memories and settings from other AIs like ChatGPT, Google Gemini, and Microsoft Copilot at once.
  • While users previously had to re-teach personal preferences and work styles to a new AI, this tool allows skipping that process.
  • Accessible via Claude Settings > Privacy > Memory preferences > Start import, supporting both free and paid plans.
  • Usage: Copy the migration instruction, paste it into the existing AI to extract memories as text, and enter that text into Claude.
  • This move actively targets the ChatGPT exodus at a time when the Claude iOS app reached #1 on the App Store.
Notable Quotes & Details
  • Claude iOS app recorded #1 free app on Apple App Store
  • ChatGPT is being affected by the 'QuitGPT' campaign

General consumers and users considering switching AI services

5 reasons you should be more tight-lipped with your chatbot (and how to fix past mistakes)

An article covering the risks and countermeasures of sharing too much sensitive personal information with chatbots.

  • People are inadvertently disclosing sensitive information like finances, health, and psychological counseling to AI chatbots, and researchers are analyzing the impact.
  • Whether models 'memorize' information and whether it can be re-exposed later are core concerns, and also a key basis for NYT's lawsuit against OpenAI.
  • Jennifer King of Stanford HAI advised choosing information to share with chatbots carefully, saying, 'You just can't control where the information goes'.
  • Personal information can be included in training data through public records, document uploads, etc., which is a concern for misuse for surveillance purposes.
  • Mentioned Anthropic's recent opposition to large-scale domestic surveillance use by the Department of Defense, highlighting the reality of AI privacy issues.
Notable Quotes & Details
  • According to a 2025 Elon University study, over half of US adults use LLMs
  • 43% of workers responded they have shared sensitive information (including financial/customer data) with AI
  • "The ultimate problem is that you just can't control where the information goes" — Jennifer King, Stanford HAI

General consumers, AI service users, and readers interested in privacy

Microsoft Launches Azure Copilot Migration Agent to Accelerate Cloud Migration Planning

Microsoft publicly released 'Azure Copilot Migration Agent,' an AI-based cloud migration planning tool integrated into the Azure portal.

  • Azure Copilot Migration Agent operates on top of Azure Migrate data to automate migration planning and evaluation stages.
  • 3 major features: ① Agentless discovery of VMware environments and generation of 6R recommendations, ② Automatic generation of Landing Zones (Terraform/Bicep templates), ③ GitHub Copilot integration to support .NET/Java code modernization.
  • Currently in public preview, it is limited to the planning layer and does not support actual migration execution (replication/switchover).
  • Full end-to-end planning support (including Landing Zones) is restricted to VMware workloads; Hyper-V and bare-metal only receive analysis and strategy guidance.
  • AWS Transform differentiates itself by possessing an agent that handles execution beyond planning.
Notable Quotes & Details
  • Flexera report: Cloud budgets exceeded by 17% on average, cost management is the top challenge for 84% of organizations
  • AWS Transform launched in May 2025, supporting actual migration execution in addition to planning

Enterprise IT managers, cloud architects, and DevOps engineers

Notes: Microsoft described it as a 'public release,' but it is actually in public preview — a point noted by 4sysops.

ProxySQL Introduces Multi-Tier Release Strategy With Stable, Innovative, and AI Tracks

ProxySQL introduced a multi-tier release strategy consisting of Stable, Innovative, and AI/MCP tracks, and unveiled the 4.0.x track exploring AI integration features.

  • Announced a three-release track strategy alongside the launch of ProxySQL 3.0.6 (Stable Tier): Stable (3.0.x), Innovative (3.1.x), and AI/MCP (4.0.x).
  • The Innovative Tier (3.1.x) introduces new observability features like an internal time-series DB and traffic observer early.
  • The AI/MCP Tier (4.0.x) includes experimental AI integration and the MCP stack, exploring ways to handle natural language queries and RAG pipelines at the proxy layer.
  • Core philosophy of v4: Reduce infrastructure complexity by concentrating AI logic at the proxy layer instead of distributing it to the app layer or DB.
  • 3.0.6 included PostgreSQL advanced query logging/compatibility enhancement, Prometheus metric improvement, and added macOS support.
Notable Quotes & Details
  • ProxySQL is GNU GPL v3 licensed open-source, with enterprise features provided under a commercial license
  • Expanding beyond the MySQL/MariaDB ecosystem by adding PostgreSQL support since 2024

Database administrators, back-end developers, and data infrastructure engineers

Jooojub
System S/W engineer
Explore Tags
Series
    Recent Post
    © 2026. jooojub. All right reserved.