AI Weekly: GPT-5 Preparations, Claude Code Limits, and Context Rot
Welcome to this week’s AI newsletter! We’re seeing significant movement across the AI landscape with major announcements from OpenAI, Anthropic, and Mistral AI, plus some fascinating new tools and concepts emerging.
🚀 GPT-5 Goes Live: What Matters for Power Users
GPT-5 kicks off a new chapter by blending fast and slow thinking, mastering multi-modal content, and becoming a genuine “AI agent” for complex productivity. If deep integrations and automation are your focus—or you want the latest in smart AI—this is a big leap. But for code-centric power users, keeping Claude 4 in your toolset still makes sense.
Launch Date:
OpenAI is officially launching GPT-5 in early August 2025, bringing several upgrades aimed at both developers and everyday professionals.
Key Features at a Glance
Unified “Smart Mode” Architecture:
GPT-5 doesn’t make users choose between fast but shallow responses (like the older GPT-4o) and deep, deliberate reasoning (as in OpenAI’s o-series models such as o3). Instead, it automatically blends quick thinking and high-level reasoning in a single session, adapting on the fly to what your prompt needs—no manual toggling. It doesn’t simply fall back on the o3 approach; it’s a new architecture that merges fast and slow thinking seamlessly.
Expanded Context Window:
Handles up to 1 million tokens, letting you upload books, codebases, or project archives for discussion—yet beware “context rot”: if your docs aren’t curated, performance can still degrade, so targeted snippets beat giant dumps.
Multimodal Mastery:
Natively understands and reasons about text, images, and audio in a unified conversation. For instance: snap a photo of a whiteboard, upload your meeting notes, and get actionable next steps.
Agentic Capabilities & Integrations:
GPT-5 acts as a smart assistant: It can call APIs, run functions, and automate workflows inside your favorite software (calendar, email, even databases). This is genuine “agent mode”—not just answering queries, but executing tasks for you.
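To make “agent mode” concrete, here is a minimal Python sketch of tool calling through OpenAI’s existing Chat Completions interface. Treat it as an illustration only: the gpt-5 model name and the schedule_meeting tool are placeholders, not confirmed details of the release.

```python
# Minimal sketch of agent-style function calling with the OpenAI Python SDK.
# Assumptions: "gpt-5" and the schedule_meeting tool are placeholders; the
# tools/function-calling pattern shown follows the existing Chat Completions API.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "schedule_meeting",  # hypothetical tool exposed by your app
        "description": "Create a calendar event for the given attendees.",
        "parameters": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "start": {"type": "string", "description": "ISO 8601 start time"},
                "attendees": {"type": "array", "items": {"type": "string"}},
            },
            "required": ["title", "start", "attendees"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-5",  # placeholder model name
    messages=[{"role": "user",
               "content": "Book a 30-minute sync with Anna next Tuesday at 10:00."}],
    tools=tools,
)

# If the model decides to act, it returns a tool call instead of plain text;
# your code executes it and sends the result back for a final answer.
call = response.choices[0].message.tool_calls[0]
print(call.function.name, json.loads(call.function.arguments))
```

The key design point is that the model only proposes the call; your own code runs it and returns the result, which keeps the agent’s actions auditable.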
Coding Benchmarks: GPT-5 vs Claude 4
Coding Performance:
On the SWE-bench benchmark, GPT-5 (in preview) hits ~39%, while Claude 4 leads with ~53%—Claude still has the edge for hardcore coding and bug-fixing right now, especially on live repo tasks.
Reasoning & Multitask Learning:
GPT-5 averages a strong 89.6% on MMLU—up from GPT-4.1’s 86.4%—and excels at multi-step logic, making it a dependable choice for legal and technical research.
When Should You Use GPT-5?
YES —
- If you need an AI that takes action, not just answers: automating repetitive work, scheduling meetings, summarizing and filing docs.
- When you require high accuracy, multi-modal support, or plan to integrate AI into other tools (via function calling/API).
- For research, data analysis, and professional tasks where context, reliability, and smart automation matter.
SKIP (for now) —
- If you’re a casual user or mainly want quick writing, basic Q&A, or simple coding—GPT-5’s upgrades may outpace your needs and cost more.
- Hardcore coders: Claude 4 is currently stronger on code and repo-level troubleshooting, so consider both if coding is mission critical.
This represents the next major leap in large language model capabilities, potentially setting new benchmarks for AI performance across various tasks.
📊 Anthropic Claude Code Rate Limits: What’s Coming and Why It Counts
Starting August 28, 2025, Anthropic is adding new weekly usage caps to Claude Code across all paid plans. On top of the existing 5-hour session resets, users will face hard weekly limits (e.g., 40–80 hours/week for Pro). A small handful of heavy users will need to either pace themselves or buy extra capacity.
Why the change? Some were running non-stop workloads that strained Anthropic’s servers and budget. This move is meant to curb abuse, protect average users’ experience, and keep prices sustainable.
Compared to Cursor:
Cursor recently switched to an API-credits model where costs pile up unpredictably and context limits interrupt big tasks. Users report surprise overage bills and smaller working windows. By contrast, Claude Code’s new limits are clear—you know exactly what you’re paying for with no hidden bills.
User sentiment:
- Many welcome predictable caps and improved stability (“At least I know my max bill”).
- Power users are frustrated by hard stops mid-project and vague usage tracking (“Wish I could see my limit in real time”).
- Some are looking at alternatives, but most agree: better to have clear limits than nasty overages.
Bottom line:
Claude Code’s weekly rate limits are a shift toward transparent, predictable cost control—a big plus over Cursor’s recent changes, even if a few power users have to adapt.
🤝 What the Mistral AI & NTT Data Partnership Means for Scandinavian Fintech
The recent alliance between Mistral AI and NTT Data could reshape the AI landscape for Scandinavian fintechs, offering new advantages in compliance, localization, and AI-powered innovation.
1. Data Sovereignty & Nordic Compliance
Fintechs in Scandinavia face rigorous privacy rules. This partnership promises AI solutions where all customer and transactional data stays within national/EU borders, ensuring full alignment with both EU and Nordic data laws.
2. Region-Specific, Customizable AI
By focusing on open, high-performance models tailored for European markets, Scandinavian fintechs can deploy AI tools optimized for local languages and processes—critical for delivering compliant, regionally relevant services.
3. Secure, Enterprise-Grade Cloud Options
Private and sovereign cloud deployment options remove the need to rely on US-based clouds, a common regulatory hurdle in the Nordics. End-to-end services mean even smaller fintechs can quickly implement AI without massive in-house teams.
4. Fast, Compliant Customer Automation
Integration with NTT Data’s AI ecosystem lets fintechs automate onboarding, anti-fraud, and reporting processes—speeding up compliance-driven innovations and audits while keeping transparency high.
Key Benefits & Strategic Points
- Rapid AI Adoption: Scandinavian fintechs can adopt advanced but compliant AI without common deployment delays.
- Sustainability Mindset: Efficient, sustainable AI models resonate with Nordic tech values.
- European Digital Path: Provides a high-quality, non-US alternative for critical AI projects.
Things to Watch
- Integration Hurdles: Tying new AI into legacy systems could be complex.
- Language Accuracy: Success hinges on strong support for Nordic languages.
- Market Focus: The Nordics’ role in early adoption will shape lasting impact.
The partnership offers plug-and-play, compliance-ready, Scandinavian language–compatible AI platforms—giving Nordic fintechs the tools to lead in secure, customer-focused financial services innovation.
📚 ChatGPT Study Mode: Fintech Pros’ Shortcut to Upskilling
ChatGPT Study Mode isn’t just for students—it’s a quick, personalized learning tool that professionals in fintech can use to stay updated on new tech, regulations, and industry changes.
Why Use Study Mode for Fintech?
- Digest Complex Changes: Breaks down compliance updates, tech specs, or case studies step by step, so you learn and remember—not just skim.
- Custom Fits to Projects: Add your own policy docs or code, get auto-generated quizzes or flashcards tailored to your daily work.
- 24/7 Learning: Study at your own pace, any time, from anywhere.
Proceed With Awareness
- Double-Check Facts: Output isn’t always perfect. Cross-reference AI insights with trusted fintech sources.
- Don’t Skip Human Input: For strategy or high-stakes work, keep collaborating with colleagues.
Bottom Line: ChatGPT Study Mode is a fast, customizable way for fintech teams to keep their edge. Use it to break down dense topics and build skills—just make sure to keep critical thinking in the workflow.
See It in Action
- YouTube: ChatGPT Study Mode Demo
(A quick walkthrough of ChatGPT’s guided learning mode showcasing how it supports knowledge checks and interactive prompts.)
🔄 Understanding “Context Rot”
What is “Context Rot” and Why Does it Matter?
Context rot is a term used to describe how AI chatbots and virtual assistants gradually lose track of what matters in a conversation as more information piles up. Picture a chatbot that starts out sharp but, after many back-and-forth replies, gets confused—forgetting important points, mixing things up, or referencing outdated info.
Why does this matter?
- As conversations or documents get longer, AI can start making more mistakes or offer less relevant answers.
- This matters for everyone who uses AI for research, customer service, or any long-term chat: the quality of responses can quietly slide downhill.
- If you know about context rot, you can spot when your AI assistant is “losing the plot”—and then you can reset, clarify, or trim down what you share for sharper answers.
In short: Context rot is a hidden reason why your favorite AI assistant sometimes goes off track. Knowing it exists helps you manage expectations and get better results. If you use (or build) AI tools, you should care—because more information isn’t always better if the AI can’t keep up.
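For builders, the simplest mitigation is to curate what the model sees on every request. Below is a minimal, hypothetical helper (not any vendor’s API) that trims older turns so the prompt stays within a rough word budget; a real system would count tokens with a proper tokenizer rather than words.

```python
# Hypothetical helper: keep a chat history under a rough size budget so the model
# always sees the system prompt plus the most recent, most relevant turns.
# A production version would count tokens (e.g. with a tokenizer), not words.

def trim_history(messages, max_words=2000):
    """Return the system message plus as many recent turns as fit the budget."""
    system = [m for m in messages if m["role"] == "system"]
    turns = [m for m in messages if m["role"] != "system"]

    kept, used = [], sum(len(m["content"].split()) for m in system)
    for msg in reversed(turns):                  # newest first
        cost = len(msg["content"].split())
        if used + cost > max_words:
            break
        kept.append(msg)
        used += cost
    return system + list(reversed(kept))         # restore chronological order


history = [
    {"role": "system", "content": "You are a concise research assistant."},
    {"role": "user", "content": "Summarize the attached 40-page PSD2 guidance."},
    # ... many more turns ...
    {"role": "user", "content": "What were the three action items we agreed on?"},
]
print(len(trim_history(history, max_words=500)))
```

Keeping the system prompt plus only the most recent relevant turns is a crude but effective way to slow context rot in long-running chats.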
🤖 Automated Browser Control in Cursor MCP
Cursor’s MCP servers now bring true browser automation straight into your workflow—no scripts needed. Just use natural language to tell Cursor to navigate, fill forms, grab screenshots, or scrape data—all automated in your real browser.
Why It Matters
- No more manual repetition: Automate those browser tasks you hate, from test runs to routine data collection.
- Instant feedback: Trigger end-to-end tests, debug frontends, or collect logs right from your IDE—no context switching.
- More power, less friction: Focus on shipping code, not copy-pasting data or clicking buttons repeatedly.
How to Get Automation
- Install the Browser Extension:
Download the Browser MCP (or BrowserTools MCP) extension for Chrome (from GitHub or the store).
- Add an Automation MCP Server in Cursor:
- Go to Cursor Settings → Features → MCP Servers.
- Click “Add new MCP server” and enter a command, like:
npx @agentdeskai/browser-tools-mcp@latest
or, for Playwright:
npx @playwright/mcp@latest
(A JSON-based config alternative is sketched after these steps.)
- Save, refresh, and look for a green status light.
- Start Automating:
In Cursor’s chat, type commands like:
Go to google.com and search for “Browser MCP”
Cursor handles the rest, orchestrating browser actions as requested.
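If you would rather manage servers in a config file than in the settings UI, Cursor also reads MCP server definitions from a project-level .cursor/mcp.json (or a global equivalent). The snippet below is a sketch of that format using the two servers mentioned above; double-check Cursor’s current docs, as the exact schema can change between releases.

```json
{
  "mcpServers": {
    "browser-tools": {
      "command": "npx",
      "args": ["@agentdeskai/browser-tools-mcp@latest"]
    },
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```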
Do You Need to Configure Anything Special?
- Mostly familiar: If you’ve set up MCP servers before, this is nearly the same—just add/enable the browser automation server and extension.
- Extension permissions: Accept any prompted permissions for automation.
- Use your real browser: Automation works in your own session, so it keeps you logged in and avoids CAPTCHAs.
Bottom line: If you’re already using MCPs in Cursor, automation is a quick upgrade—with serious productivity gains, especially for repetitive browser tasks and testing. Give it a try!
🧠 Obsidian & NotebookLM Integration—A Scandinavian Fintech Lens
Core Integration Approach:
This is not a seamless sync, but a manual workflow: export your Obsidian notes (typically as PDFs) and upload them to Google NotebookLM for AI analysis. Your data moves from private storage to Google’s cloud.
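Because the export step is manual, some users script it. The sketch below is one hedged example, not an official integration: it batch-converts a folder of Markdown notes to PDF via the pandoc CLI so they are ready for upload. It assumes pandoc (plus a PDF engine) is installed, and, crucially, that the folder holds only non-sensitive notes.

```python
# Hypothetical batch export: convert non-sensitive Obsidian Markdown notes to PDF
# for manual upload to NotebookLM. Requires pandoc (and a PDF engine such as
# a LaTeX install or wkhtmltopdf) on PATH. Keep regulated data out of this folder.
import subprocess
from pathlib import Path

VAULT_FOLDER = Path("vault/public-research")   # placeholder path: non-sensitive notes only
EXPORT_FOLDER = Path("export/notebooklm")
EXPORT_FOLDER.mkdir(parents=True, exist_ok=True)

for note in VAULT_FOLDER.glob("*.md"):
    target = EXPORT_FOLDER / note.with_suffix(".pdf").name
    subprocess.run(
        ["pandoc", str(note), "-o", str(target)],
        check=True,  # stop if a note fails to convert
    )
    print(f"Exported {note.name} -> {target}")
```

Even with scripting, every refresh is still a re-export and re-upload, which is exactly the usability limitation described below.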
What’s Really New for Obsidian Users:
You’ll need to leave Obsidian’s local, private system and work partly in Google’s environment to access new features. This can change how—and where—you manage your notes.
Impact on Regulated Scandinavian Fintech:
Transferring notes to Google’s servers may violate local and EU regulations (GDPR, PSD2, etc.). Sensitive client or business data should remain strictly local.
Usability Limitations:
- No direct syncing; every update is a manual export/import.
- You lose Obsidian’s linking and plugin features within NotebookLM.
- Notebooks in NotebookLM can’t be cross-referenced.
Best Use Case:
Only use this for non-sensitive content—like public research and general market information. Never process client or confidential data through NotebookLM: keep all regulated information in Obsidian.
That’s a wrap for this week’s AI developments! The pace of innovation continues to accelerate, with new tools and capabilities emerging regularly. Stay tuned for next week’s update.