12 Days of OpenAI: The complete guide to daily AI breakthroughs and launches
OpenAI unwraps a series of groundbreaking AI announcements in their special year-end showcase starting at 10am PT daily.
In a festive twist on traditional tech announcements, OpenAI has launched its “12 Days of OpenAI” event, turning the end of 2024 into an AI innovation showcase. Each day at 10am PT, the company behind ChatGPT and DALL·E unveils new developments reshaping the artificial intelligence landscape. As the event unfolds, we’ll be documenting each day’s releases here, providing comprehensive coverage of every announcement.
Day 1:
- OpenAI launched ChatGPT Pro, a $200/month plan offering unlimited access to its most advanced models and enhanced productivity tools. This subscription is designed for researchers and professionals looking to push the boundaries of AI tools.
- OpenAI launched the full version of its advanced reasoning model, o1, now capable of processing both images and text. The updated o1 boasts faster speeds—completing tasks in under half the time—and a 34% reduction in error rates.
- OpenAI then released the o1 System Card, detailing safety measures like external red teaming and risk evaluations for o1 and o1-mini. The scorecard rates key risks, allowing deployment only for models scoring “medium” or below post-mitigation.
Day 2:
- OpenAI launched the Reinforcement Fine-Tuning Research Program, offering alpha access to developers, researchers, and enterprises to customize models for complex, domain-specific tasks in fields like law, healthcare, and engineering. Participants will shape this new technology by providing feedback and datasets ahead of its public release in early 2025.
Day 3:
- OpenAI has launched Sora, their groundbreaking video generation model that can create realistic videos from text descriptions and is now available to ChatGPT Plus and Pro users. The new version, called Sora Turbo, is significantly faster than the version previewed in February and allows users to generate videos up to 1080p resolution and 20 seconds long in various aspect ratios. The platform includes built-in safeguards like C2PA metadata and visible watermarks by default, while offering features like a storyboard tool for precise frame control and the ability to blend existing assets with AI-generated content.
Day 4:
- OpenAI has launched Canvas, a collaborative writing and coding tool that is now available to all ChatGPT users regardless of their subscription plan. The tool provides a side-by-side interface where users can edit documents alongside ChatGPT, with features like inline comments, formatting options, and the ability to track changes. Canvas now includes Python code execution capabilities with immediate feedback and visualization support, allowing users to run and debug code directly within the interface using a WebAssembly Python emulator. Additionally, OpenAI has integrated Canvas into custom GPTs, enabling developers to create specialized applications that automatically utilize Canvas when appropriate, as demonstrated with a Santa letter-writing GPT example.
Day 5:
- OpenAI has launched ChatGPT integration across Apple devices, allowing users to access ChatGPT through Siri, writing tools, and camera controls on iPhone, iPad, and macOS. The integration enables users to invoke ChatGPT directly from the operating system, with features including document analysis, visual intelligence for camera-captured images, and seamless conversation continuation between devices. Users can enable the feature through Apple Intelligence settings and use ChatGPT either anonymously or with an account, making the AI assistant more accessible and frictionless across Apple’s ecosystem.
Day 6:
- OpenAI has launched video and screen sharing capabilities in Advanced Voice mode for ChatGPT, allowing users to have real-time visual conversations and share their screens during interactions. They’ve also introduced a Santa persona in ChatGPT that speaks with a jolly voice and shares North Pole stories throughout December. The video and screen sharing features are rolling out to Teams users and most Plus and Pro subscribers (with European Plus/Pro users getting access later), while the Santa feature is available globally wherever ChatGPT voice mode is supported.
Day 7:
- OpenAI has launched “Projects” in ChatGPT, a new feature that allows users to organize conversations, upload files, set custom instructions, and tailor ChatGPT interactions specific to each project. The feature, demonstrated through examples like organizing a Secret Santa gift exchange and maintaining home documentation, includes integration with existing ChatGPT capabilities like web search, conversation search, and Canvas. Projects is rolling out to Plus, Pro, and Teams users immediately, with plans to extend to free users soon and Enterprise/EDU users in early 2025.
Day 8:
- OpenAI has launched ChatGPT search capability for all logged-in free users globally, allowing them to access real-time information and search the web directly within conversations across all platforms. The update includes improved features such as faster performance, better mobile optimization, new maps experiences, and the ability to set ChatGPT as a default search engine in browsers. Additionally, OpenAI has integrated search functionality with their advanced voice mode, enabling users to access up-to-date web information through voice conversations with ChatGPT, with this feature rolling out in the following days.
Day 9:
- OpenAI has launched o1 in its API (previously available as o1-preview), with new features including function calling, structured outputs, developer messages, reasoning effort control, and vision capabilities. The company also announced WebRTC support for its Realtime API, making it easier to build voice applications, along with a 60% cost reduction for GPT-4o audio tokens and the introduction of GPT-4o mini audio at 10x lower prices. Additionally, OpenAI introduced preference fine-tuning using direct preference optimization, released new SDKs for Go and Java, and simplified its API key acquisition process.
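Two of the Day 9 API features, developer messages and reasoning effort control, show up as plain request parameters. The sketch below illustrates how such a request might look with the official `openai` Python SDK; the exact parameter values and model availability are assumptions, and the network call only runs if an API key is configured:

```python
import os

# The SDK is optional here: the request payload itself is the point.
try:
    from openai import OpenAI  # official openai-python package
except ImportError:
    OpenAI = None

# As announced on Day 9: a "developer" message plays the role the
# "system" message did for earlier models, and reasoning_effort lets
# callers trade latency and cost against how long o1 thinks.
request = {
    "model": "o1",
    "reasoning_effort": "low",  # "low" | "medium" | "high"
    "messages": [
        {"role": "developer", "content": "Answer in one short sentence."},
        {"role": "user", "content": "Why is the sky blue?"},
    ],
}

# Only hit the API when both the SDK and a key are available.
if OpenAI is not None and os.environ.get("OPENAI_API_KEY"):
    client = OpenAI()
    response = client.chat.completions.create(**request)
    print(response.choices[0].message.content)
```

The same request shape carries over to the new Go and Java SDKs mentioned above, which expose equivalent fields.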
Day 10:
- OpenAI has launched voice calling (via 1-800-CHATGPT) and WhatsApp messaging capabilities for ChatGPT, making the AI assistant accessible through traditional phone calls in the US and WhatsApp messaging globally. The phone service works on any phone type – including smartphones, flip phones, and even rotary phones – while the WhatsApp integration currently supports text-only conversations, with features like image chat planned for the future. The new services are part of OpenAI’s mission to make artificial general intelligence beneficial and accessible to humanity, with users getting 15 minutes of free calling per month on the phone service, while the WhatsApp service can be accessed immediately by scanning a QR code.
Day 11:
- OpenAI has launched significant updates to its ChatGPT desktop applications, enabling direct interaction with various desktop apps including terminal emulators, IDEs, and writing applications like Notion and Apple Notes. The updates, announced during Day 11 of their December series, include features like advanced voice mode, web search capabilities, and support for OpenAI’s latest models. These features are immediately available for macOS users with Windows support coming soon, marking a step toward OpenAI’s vision of making ChatGPT more “agentic” and actively helpful in users’ daily work.
Day 12:
- OpenAI has announced two new frontier models – o3 and o3-mini – during Day 12 of their December event series. While not immediately available for public use, OpenAI is opening access for public safety testing starting today through January 10th. o3 demonstrates exceptional performance across technical benchmarks, achieving 71.7% accuracy on software tasks (20% better than o1) and setting a new state-of-the-art score of 87.5% on the ARC-AGI benchmark, surpassing human performance. o3-mini, designed for cost-efficient reasoning, matches or exceeds o1’s performance at a fraction of the cost. OpenAI plans to launch o3-mini around the end of January with o3 following shortly after, pending safety testing results.