LLM
21 posts

How we improved Canva’s private design search while respecting the privacy of our community.
With a new generation of data center accelerator hardware and using optimization techniques such as KV cache compression and speculative decoding, we’ve made large language model (LLM) inference lightning-fast on the Cloudflare Workers AI platform.
Cloudflare customers on any plan can now audit and control how AI models access the content on their site.
We've tested integrating OpenAI o1-preview with GitHub Copilot. Here's a first look at where we think it can add value to your day-to-day work.
We are enabling the rise of the AI engineer with GitHub Models, bringing the power of industry-leading large and small language models to our more than 100 million users directly on GitHub.
Building on prior prompt injection research, we recently discovered a new training data extraction vulnerability involving OpenAI’s chat completion models.
The Workers AI and AI Gateway team recently collaborated closely with security researchers at Ben Gurion University regarding a report submitted through our Public Bug Bounty program. Through this process, we discovered and fully patched a vulnerability affecting all LLM providers. Here’s the story.
Learn how your organization can customize its LLM-based solution through retrieval-augmented generation and fine-tuning.
Learn how we’re experimenting with generative AI models to extend GitHub Copilot across the developer lifecycle.
Here’s everything you need to know to build your first LLM app, plus problem spaces you can start exploring today.
Explore how LLMs generate text, why they sometimes hallucinate information, and the ethical implications surrounding their incredible capabilities.
Open source generative AI projects are a great way to build new AI-powered features and apps.
With Weaviate, you can build advanced LLM applications, next-level search systems, recommendation systems, and more. Discover features of the Weaviate vector database and learn how to install Weaviate on Docker using Docker Compose.
The team behind GitHub Copilot shares its lessons for building an LLM app that delivers value to both individuals and enterprise users at scale.
This post provides practical information to help developers build Forge apps with AI capabilities. Forge offers a range of features and simplifies development and hosting, making it a great option for exploring AI development.
Prompt engineering is the art of communicating with a generative AI model. In this article, we’ll cover how we approach prompt engineering at GitHub, and how you can use it to build your own LLM-based application.
Developers behind GitHub Copilot discuss what it was like to work with OpenAI’s large language model and how it informed the development of Copilot as we know it today.