PJFP.com

Pursuit of Joy, Fulfillment, and Purpose

Tag: Code Generation

  • Composer: Building a Fast Frontier Model with Reinforcement Learning

    Composer represents Cursor’s most ambitious step yet toward a new generation of intelligent, high-speed coding agents. Built through deep reinforcement learning (RL) and large-scale infrastructure, Composer delivers frontier-level results at speeds up to four times faster than comparable models. It isn’t just another large language model; it’s an actively trained software engineering assistant optimized to think, plan, and code with precision — in real time.

    From Cheetah to Composer: The Evolution of Speed

    The origins of Composer go back to an experimental prototype called Cheetah, an agent Cursor developed to study how much faster coding models could get before hitting usability limits. Developers consistently preferred the speed and fluidity of an agent that responded instantly, keeping them “in flow.” Cheetah proved the concept, but it was Composer that matured it — integrating reinforcement learning and mixture-of-experts (MoE) architecture to achieve both speed and intelligence.

    Composer’s training goal was simple but demanding: make the model capable of solving real-world programming challenges in real codebases using actual developer tools. During RL, Composer was given tasks like editing files, running terminal commands, performing semantic searches, or refactoring code. Its objective wasn’t just to get the right answer — it was to work efficiently, using minimal steps, adhering to existing abstractions, and maintaining code quality.
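
    To make that objective concrete, here is a minimal sketch of the kind of shaped reward described above; the signals, weights, and names (EpisodeOutcome, shaped_reward, step_budget) are illustrative assumptions, not Cursor’s actual reward design.

      # Hypothetical shaped reward for one RL episode: correctness first,
      # then penalties for wasted steps, sloppy diffs, and unverified claims.
      from dataclasses import dataclass

      @dataclass
      class EpisodeOutcome:
          tests_passed: bool       # did the final code pass the task's tests?
          steps_taken: int         # tool calls used (edits, searches, commands)
          lint_errors: int         # linter violations left in the diff
          unsupported_claims: int  # statements the agent could not verify

      def shaped_reward(o: EpisodeOutcome, step_budget: int = 20) -> float:
          reward = 1.0 if o.tests_passed else 0.0
          reward -= 0.01 * max(0, o.steps_taken - step_budget)  # efficiency: minimal steps
          reward -= 0.05 * o.lint_errors                        # code quality
          reward -= 0.10 * o.unsupported_claims                 # verifiability
          return reward

      print(shaped_reward(EpisodeOutcome(True, 25, 1, 0)))  # ~0.9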

    Training on Real Engineering Environments

    Rather than relying on synthetic datasets or static benchmarks, Cursor trained Composer within a dynamic software environment. Every RL episode simulated an authentic engineering workflow — debugging, writing unit tests, applying linter fixes, and performing large-scale refactors. Over time, Composer developed behaviors that mirror an experienced developer’s workflow. It learned when to open a file, when to search globally, and when to execute a command rather than speculate.
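
    One such episode might look like the following rollout sketch; SandboxEnv, the tool list, and the policy interface are hypothetical stand-ins, not Cursor’s actual environment API.

      # One RL episode in a sandboxed workspace: the policy repeatedly picks
      # a tool (read, search, edit, run) until the task is done or budget ends.
      from typing import Protocol

      class SandboxEnv(Protocol):
          def observe(self) -> str: ...                    # task description + workspace state
          def step(self, tool: str, arg: str) -> str: ...  # execute a tool, return its output
          def done(self) -> bool: ...

      TOOLS = ("read_file", "semantic_search", "edit_file", "run_command")

      def run_episode(policy, env: SandboxEnv, max_steps: int = 20) -> list[tuple[str, str]]:
          trajectory = []
          for _ in range(max_steps):
              tool, arg = policy(env.observe(), TOOLS)  # e.g. open a file vs. search globally
              output = env.step(tool, arg)
              trajectory.append((tool, output))
              if env.done():
                  break
          return trajectory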

    Cursor’s evaluation framework, Cursor Bench, measures progress by realism rather than abstract metrics. It compiles actual agent requests from engineers and compares Composer’s solutions to human-curated optimal responses. This lets Cursor measure not just correctness, but also how well the model respects a team’s architecture, naming conventions, and software practices — metrics that matter in production environments.
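
    As a rough illustration, a bench-style harness could look like the sketch below; the request format and judge functions are invented for this example and are not Cursor Bench’s actual design.

      # Toy evaluation loop: score each agent solution against a
      # human-curated reference, for correctness and for conventions.
      def evaluate(agent, requests) -> dict:
          correct = conventions = 0
          for req in requests:
              solution = agent(req["prompt"])
              correct += req["judge"](solution, req["reference"])  # does it solve the task?
              conventions += req["style_judge"](solution)          # naming, architecture, practices
          n = len(requests)
          return {"correct": correct / n, "follows_conventions": conventions / n}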

    Reinforcement Learning as a Performance Engine

    Reinforcement learning is at the heart of Composer’s performance. Unlike supervised fine-tuning, which simply mimics examples, RL rewards Composer for producing high-quality, efficient, and contextually relevant work. It actively learns to choose the right tools, minimize unnecessary output, and exploit parallelism across tasks. The model was even rewarded for avoiding unsupported claims — pushing it to generate more verifiable and responsible code suggestions.

    As RL progressed, emergent behaviors appeared. Composer began autonomously running semantic searches to explore codebases, fixing linter errors, and even generating and executing tests to validate its own work. These self-taught habits transformed it from a passive text generator into an active agent capable of iterative reasoning.

    Infrastructure at Scale: Thousands of Sandboxed Agents

    Behind Composer’s intelligence is a massive engineering effort. Training large MoE models efficiently requires significant parallelization and precision management. Cursor’s infrastructure, built with PyTorch and Ray, powers asynchronous RL at scale. Their system supports thousands of simultaneous environments, each a sandboxed virtual workspace where Composer experiments safely with file edits, code execution, and search queries.
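
    In that spirit, a minimal asynchronous rollout collector with Ray might look like this sketch; the SandboxWorker actor and its rollout method are invented for illustration, and a real system would add weight broadcasting, batching, and fault tolerance.

      # Asynchronous rollouts with Ray actors: each actor owns one sandboxed
      # workspace, and results are consumed as soon as any episode finishes.
      import ray

      ray.init(ignore_reinit_error=True)

      @ray.remote
      class SandboxWorker:
          def __init__(self, worker_id: int):
              self.worker_id = worker_id

          def rollout(self, policy_weights) -> dict:
              # Placeholder: a real worker would load the weights, run the
              # agent against a sandboxed repo, and return trajectory + reward.
              return {"worker": self.worker_id, "reward": 0.0, "trajectory": []}

      workers = [SandboxWorker.remote(i) for i in range(8)]  # thousands, in practice
      pending = [w.rollout.remote(policy_weights=None) for w in workers]

      while pending:
          done, pending = ray.wait(pending, num_returns=1)  # take whichever finishes first
          result = ray.get(done[0])  # feed this trajectory to the asynchronous learner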

    To achieve this scale, the team integrated MXFP8 MoE kernels with expert and hybrid-sharded data parallelism. This setup allows distributed training across thousands of NVIDIA GPUs with minimal communication cost — effectively combining speed, scale, and precision. MXFP8 also enables faster inference without any need for post-training quantization, giving developers real-world performance gains instantly.
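
    Stock PyTorch exposes a hybrid sharding strategy of this kind through FSDP. The sketch below uses bfloat16 mixed precision as a stand-in, since MXFP8 MoE kernels are custom and not part of standard PyTorch, and it assumes a distributed process group has already been initialized.

      # Hybrid-sharded data parallelism with PyTorch FSDP: shard parameters
      # within a node, replicate across nodes, and train in low precision.
      import torch
      import torch.nn as nn
      from torch.distributed.fsdp import (
          FullyShardedDataParallel as FSDP,
          MixedPrecision,
          ShardingStrategy,
      )

      def wrap_for_training(model: nn.Module) -> FSDP:
          return FSDP(
              model,
              sharding_strategy=ShardingStrategy.HYBRID_SHARD,  # intra-node shard, inter-node replicate
              mixed_precision=MixedPrecision(
                  param_dtype=torch.bfloat16,   # stand-in for custom low-precision kernels
                  reduce_dtype=torch.bfloat16,  # cheaper gradient communication
              ),
          )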

    Cursor’s infrastructure can spawn hundreds of thousands of concurrent sandboxed coding environments. This capability, adapted from their Background Agents system, was essential to unify RL experiments with production-grade conditions. It ensures that Composer’s training environment matches the complexity of real-world coding, creating a model genuinely optimized for developer workflows.

    The Cursor Bench and What “Frontier” Means

    Composer’s benchmark performance earned it a place in what Cursor calls the “Fast Frontier” class — models designed for efficient inference while maintaining top-tier quality. This group includes systems like Haiku 4.5 and Gemini Flash 2.5. While GPT-5 and Sonnet 4.5 remain the strongest overall, Composer outperforms nearly every open-weight model, including Qwen Coder and GLM 4.6. In tokens-per-second performance, Composer’s throughput is among the highest ever measured under the standardized Anthropic tokenizer.

    Built by Developers, for Developers

    Composer isn’t just research — it’s in daily use inside Cursor. Engineers rely on it for their own development, using it to edit code, manage large repositories, and explore unfamiliar projects. This internal dogfooding loop means Composer is constantly tested and improved in real production contexts. Its success is measured by one thing: whether it helps developers get more done, faster, and with fewer interruptions.

    Cursor’s goal isn’t to replace developers, but to enhance them — providing an assistant that acts as an extension of their workflow. By combining fast inference, contextual understanding, and reinforcement learning, Composer turns AI from a static completion tool into a real collaborator.

    Wrap Up

    Composer represents a milestone in AI-assisted software engineering. It demonstrates that reinforcement learning, when applied at scale with the right infrastructure and metrics, can produce agents that are not only faster but also more disciplined, efficient, and trustworthy. For developers, it’s a step toward a future where coding feels as seamless and interactive as conversation — powered by an agent that truly understands how to build software.

  • Diffusion LLMs: A Paradigm Shift in Language Generation

    Diffusion language models (diffusion LLMs) represent a significant departure from traditional autoregressive LLMs, offering a novel approach to text generation. Inspired by the success of diffusion models in image and video generation, these models leverage a “coarse-to-fine” process to produce text, potentially unlocking new levels of speed, efficiency, and reasoning capability.

    The Core Mechanism: Noising and Denoising

    At the heart of diffusion LLMs lies the concept of gradually adding noise to data (in this case, text) until it becomes pure noise, then reversing the process to reconstruct the original data. This reverse process, known as denoising, iteratively refines an initially noisy text representation.

    Unlike autoregressive models that generate text token by token, diffusion LLMs generate the entire output in a preliminary, noisy form and then iteratively refine it. This parallel generation process is a key factor in their speed advantage.
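
    A toy example makes the contrast concrete: start from a fully masked sequence and commit the most confident tokens in parallel at each refinement step. Everything here, including the MASK placeholder and dummy_model, is a simplified illustration rather than a real diffusion LM.

      # Coarse-to-fine masked denoising: begin with pure "noise" (all masks)
      # and fill in a batch of the most confident positions each step.
      import random

      MASK = "_"

      def denoise(model, length: int, steps: int = 4) -> list[str]:
          tokens = [MASK] * length
          per_step = -(-length // steps)  # ceil(length / steps) tokens committed per step
          for _ in range(steps):
              proposals = model(tokens)            # (position, token, confidence) per masked slot
              proposals.sort(key=lambda p: -p[2])  # most confident first
              for pos, tok, _conf in proposals[:per_step]:
                  tokens[pos] = tok                # commit several positions in parallel
              if MASK not in tokens:
                  break
          return tokens

      def dummy_model(tokens):
          # Stand-in denoiser: proposes the target token for every masked slot.
          target = "diffusion models refine text iteratively".split()
          return [(i, target[i], random.random()) for i, t in enumerate(tokens) if t == MASK]

      print(" ".join(denoise(dummy_model, length=5)))  # diffusion models refine text iteratively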

    Advantages and Potential

    • Enhanced Speed and Efficiency: By generating text in parallel and iteratively refining it, diffusion LLMs can achieve significantly faster inference speeds compared to autoregressive models. This translates to reduced latency and lower computational costs.
    • Improved Reasoning and Error Correction: The iterative refinement process allows diffusion LLMs to revisit and correct errors, potentially leading to better reasoning and fewer hallucinations. The ability to consider the entire output at each step, rather than just the preceding tokens, may also enhance their ability to structure coherent and logical responses.
    • Controllable Generation: The iterative denoising process offers greater control over the generated output. Users can potentially guide the refinement process to achieve specific stylistic or semantic goals.
    • Applications: The unique characteristics of diffusion LLMs make them well-suited for a wide range of applications, including:
      • Code generation, where speed and accuracy are crucial.
      • Dialogue systems and chatbots, where low latency is essential for a natural user experience.
      • Creative writing and content generation, where controllable generation can be leveraged to produce high-quality and personalized content.
      • Edge device applications, where computational efficiency is vital.
    • Better Overall Output: because the model can consider the entire output during refinement, it can produce higher-quality and more logically sound results.

    Challenges and Future Directions

    While diffusion LLMs hold great promise, they also face challenges. Research is ongoing to optimize the denoising process, improve the quality of generated text, and develop effective training strategies. As the field progresses, we can expect to see further advancements in the architecture and capabilities of diffusion LLMs.

  • Custom Instructions for ChatGPT: A Deeper Dive into Its Implications and Set-Up Process


    TL;DR

    OpenAI has introduced custom instructions for ChatGPT, allowing users to set preferences and requirements to personalize interactions. This is beneficial in diverse areas such as education, programming, and everyday tasks. The feature, still in beta, can be accessed by opting into ‘Custom Instructions’ under ‘Beta Features’ in the settings. OpenAI has also updated its safety measures and privacy policy to handle the new feature.


    As Artificial Intelligence continues to evolve, the demand for personalized and controlled interactions grows. OpenAI’s introduction of custom instructions for ChatGPT reflects a significant stride towards achieving this. By allowing users to set preferences and requirements, OpenAI enhances user interaction and ensures that ChatGPT remains efficient and effective in catering to unique needs.

    The Promise of Custom Instructions

    By analyzing and adhering to user-provided instructions, ChatGPT eliminates the need to re-enter the same preferences or requirements in every conversation, significantly streamlining the user experience. This feature proves particularly beneficial in fields such as education, programming, and even everyday tasks like grocery shopping.

    In education, teachers can set preferences to optimize lesson planning, catering to specific grades and subjects. Meanwhile, developers can instruct ChatGPT to generate efficient code in a non-Python language. For grocery shopping, the model can tailor suggestions for a large family, saving the user time and effort.

    Beyond individual use, this feature can also enhance plugin experiences. By sharing relevant information with the plugins you use, ChatGPT can offer personalized services, such as restaurant suggestions based on your specified location.

    The Set-Up Process

    Plus plan users can access this feature by opting into the beta for custom instructions. On the web, navigate to your account settings, select ‘Beta Features,’ and opt into ‘Custom Instructions.’ For iOS, go to Settings, select ‘New Features,’ and turn on ‘Custom Instructions.’
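
    Custom instructions themselves require no code, but their effect resembles a persistent system message applied to every conversation. For developers, a rough analogy against the OpenAI chat API might look like the sketch below; the model name and instruction text are placeholders, and this approximates the feature rather than reproducing it.

      # Approximating custom instructions with a persistent system message.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      CUSTOM_INSTRUCTIONS = (
          "I am a middle-school science teacher. "
          "Prefer lesson-plan formats aimed at 8th-grade students."
      )

      def ask(question: str) -> str:
          response = client.chat.completions.create(
              model="gpt-4",  # illustrative model name
              messages=[
                  {"role": "system", "content": CUSTOM_INSTRUCTIONS},  # applied to every request
                  {"role": "user", "content": question},
              ],
          )
          return response.choices[0].message.content

      print(ask("Plan a 40-minute lesson on photosynthesis."))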

    While it’s a promising step towards advanced steerability, it’s vital to note that ChatGPT may not always interpret custom instructions perfectly; it may misinterpret or overlook them, especially during the beta period.

    Safety and Privacy

    OpenAI has also adapted its safety measures to account for this new feature. Its Moderation API is designed to ensure instructions that violate the Usage Policies are not saved. The model can refuse or ignore instructions that would lead to responses violating usage policies.

    Custom instructions may also be used to improve model performance across users. However, OpenAI removes any personal identifiers before the data is used for this purpose, and users can disable it entirely through their data controls, demonstrating OpenAI’s commitment to privacy and data protection.

    The launch of custom instructions for ChatGPT marks a significant advancement in the development of AI, one that pushes us closer to a world of personalized and efficient AI experiences.