GitHub Copilot: Everything you need to know

Monday, November 25, 2024, 10:00 AM, from InfoWorld
In 2014, Microsoft Research released an experimental Bing Code Search add-on for Visual Studio and the web. It was a code snippet search tool for C# with a natural language interface, using an index of code from Stack Overflow, MSDN, Dotnetperls, and CSharp411, powered by Bing, running on Azure. The tool included a facility for changing the variable names from those in the snippet to those in your own code, but it didn’t work all that well. The accuracy was 70% to 80% for a single variable substitution, and fell rapidly as more variables needed to be renamed.

The experimental Microsoft Bing Code Search add-on from 2014 eventually evolved into GitHub Copilot. When I reviewed the preview version of GitHub Copilot in 2021, I found that it didn’t always generate good, correct, or even running code, but it was still somewhat useful. At the time, GitHub Copilot was powered by OpenAI Codex, a model derived from the GPT-3 large language model (LLM), and Copilot considered only the current file for its context.

Two years later (in 2023), I reviewed GitHub Copilot X, a set of technical preview features that extended the original GitHub Copilot with chat and terminal interfaces, support for pull requests, and early adoption of OpenAI’s GPT-4. The GitHub Copilot X preview was greatly improved over the original GitHub Copilot. I found that it could sometimes generate a correct function and set of tests without much human help. It still made mistakes and hallucinated (generated false information), but not nearly as much as it once did.

Since then, GitHub Copilot has continued to get better. It has become more accurate and more reliable, and has added new capabilities including command-line support, code editing, code reviews, and the ability to generate descriptions of changes in pull requests. It has also begun to support additional models beyond OpenAI GPT models.

GitHub Copilot features

The current set of GitHub Copilot features includes generating code suggestions as you type in your IDE, chatting with you about code and related topics (such as algorithms and data structures), and helping you use the command line. If you have an Enterprise subscription, Copilot can generate a description of the changes in a pull request, and manage knowledge bases to use as a context for chats. There are also several features in preview for Copilot Workspace, which we’ll discuss later on.

You can currently use GitHub Copilot in your IDE (integrated development environment), provided your IDE is supported (see the list in the next section). You can use Copilot as a chat interface in GitHub Mobile for Android, iOS, and iPadOS. You can use Copilot on the command line through the GitHub CLI. And you can use it on the GitHub website through a chat interface, currently marked “beta.” If you have a Business or Enterprise subscription, your administrators have additional controls, logs, and reports.

Although GitHub Copilot is typically updated monthly, it doesn’t necessarily improve with each update. There have been months when its efficacy, as measured by benchmarks, went down instead of up. That seems to happen when the model is trained on code in more programming languages or more frameworks, and when it is trained to eliminate some of the ways it goes off the rails. Sometimes the changes are noticeable in ordinary usage, and sometimes they are not. Occasionally there is a big improvement, for example when Copilot started including all open IDE files in its context instead of just the active file, and when OpenAI upgraded the underlying model to a new generation of GPT.

GitHub Copilot integrated with editors

GitHub Copilot is integrated with and officially supported in Azure Data Studio, JetBrains IDEs, Vim/Neovim, Visual Studio, and Visual Studio Code. There is unofficial support for Copilot in Emacs, Eclipse, and Xcode, and official support for Apple’s Xcode was announced at GitHub Universe in October 2024.

GitHub Copilot can make inline code suggestions in several ways. Give it a good descriptive function name, and it will generate a working function at least some of the time—less often if it doesn’t have much context to draw on, more often if it has a lot of similar code to use from your open files or from its training corpus.

The same qualifications apply to generating a block of code from an inline comment. Being specific about what you want also helps a great deal. If you say something vague like “sort the list,” it might choose any known sort algorithm, including a bubble sort. If you say “sort the list in-memory using a QuickSort algorithm that drops to an insertion sort for short runs and has a randomized pivot point,” it will probably do exactly what you asked, which will be much more efficient than the bubble sort.
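To illustrate, here is the kind of function that the detailed comment prompt above might plausibly produce. This is a hand-written sketch of plausible Copilot output, not actual captured output; the names and the cutoff of 16 elements are my own choices:

```python
import random

def insertion_sort(items, lo, hi):
    """Sort items[lo:hi+1] in place with insertion sort."""
    for i in range(lo + 1, hi + 1):
        key = items[i]
        j = i - 1
        while j >= lo and items[j] > key:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key

def quicksort(items, lo=0, hi=None):
    """In-place quicksort with a randomized pivot, dropping to
    insertion sort for short runs (16 elements or fewer)."""
    if hi is None:
        hi = len(items) - 1
    if hi - lo < 16:
        insertion_sort(items, lo, hi)
        return
    # A randomized pivot avoids quadratic behavior on already-sorted input.
    pivot_index = random.randint(lo, hi)
    items[pivot_index], items[hi] = items[hi], items[pivot_index]
    pivot = items[hi]
    i = lo - 1
    for j in range(lo, hi):
        if items[j] <= pivot:
            i += 1
            items[i], items[j] = items[j], items[i]
    items[i + 1], items[hi] = items[hi], items[i + 1]
    quicksort(items, lo, i)
    quicksort(items, i + 2, hi)
```

The point of the example is the specificity: every clause of the prompt (“in-memory,” “QuickSort,” “drops to an insertion sort for short runs,” “randomized pivot”) maps to a concrete decision in the code, which is exactly the guidance a vague prompt like “sort the list” fails to provide.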

Test generation is generally easier to automate than initial code generation. GitHub Copilot will often generate a reasonably good suite of unit tests on the first or second try from a vague comment that includes the word “tests,” especially if you have an existing test suite open elsewhere in the editor. It will usually take your hints about additional unit tests, as well, although you might notice a lot of repetitive code that really should be refactored. Refactoring often works better in Copilot Chat. Copilot can also generate integration tests, but you may have to give it hints about the scope, mocks, specific functions to test, and the verification you need.
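The repetitive-tests pattern is easy to see in a sketch. Given a trivial helper like the hypothetical `clamp` below, a comment such as “# unit tests for clamp” will often yield a suite along these lines (again, an illustrative hand-written example, not captured Copilot output), with one near-identical method per case that a human would likely refactor into a parametrized test:

```python
import unittest

def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high]."""
    return max(low, min(value, high))

class TestClamp(unittest.TestCase):
    # The kind of repetitive suite generated from "# unit tests for clamp".
    def test_value_below_range(self):
        self.assertEqual(clamp(-5, 0, 10), 0)

    def test_value_above_range(self):
        self.assertEqual(clamp(15, 0, 10), 10)

    def test_value_inside_range(self):
        self.assertEqual(clamp(7, 0, 10), 7)

    def test_value_at_lower_bound(self):
        self.assertEqual(clamp(0, 0, 10), 0)

    def test_value_at_upper_bound(self):
        self.assertEqual(clamp(10, 0, 10), 10)
```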

The downside of using GitHub Copilot to generate code or tests is that it isn’t consistent or trustworthy. You have to take the code generated by Copilot (or any LLM) with a huge grain of salt, and plan for a lot of code review and testing, just as you would for a pull request from an unknown programmer, or a check-in from a junior programmer with a drinking problem.

Note that the newest OpenAI models, the o1 family, are much better in this regard than the old GPT family, but not hallucination-free. (See “ChatGPT o1-preview excels at code generation.”)

GitHub Copilot Chat

GitHub Copilot Chat in your editor can give code suggestions, explain code, generate unit tests, and suggest code fixes. It can also iterate with you, almost like a real pair programmer.

In the latest version of GitHub Copilot Chat, you can specify chat “participants” with keywords prefixed with @, such as @workspace, and variables prefixed with #, such as #file. You can select the model you want to use in Copilot Chat in the “Ask Copilot” box when you submit a query. Currently the options are GPT-4o, o1-mini (preview), and o1-preview (preview), all of which are from OpenAI. The documentation also mentions Claude 3.5 Sonnet, from Anthropic, which rolled out to me as another preview during the course of writing this article. At the GitHub Universe 2024 announcement that discussed new models under the slogan of “developer choice,” the speaker mentioned that Google’s Gemini 1.5 Pro would be added in a few weeks.

As of Visual Studio Code Version 1.95, the GitHub Copilot Chat sidebar has moved from the left, where it had to share space with many other commonly used functions, to a secondary sidebar at the right. You can also invoke Copilot Chat inline, in a quick chat drop-down menu, and in smart actions from the context menu.

Asking Copilot Chat for /help (a slash command) currently gives this response:

GitHub Copilot

You can ask me general programming questions, or chat with the following participants which have specialized expertise and can perform actions:

@workspace – Ask about your workspace

/explain – Explain how the code in your active editor works

/tests – Generate unit tests for the selected code

/fix – Propose a fix for the problems in the selected code

/new – Scaffold code for a new file or project in a workspace

/newNotebook – Create a new Jupyter Notebook

/fixTestFailure – Propose a fix for the failing test

/setupTests – Set up tests in your project (Experimental)

@vscode – Ask questions about VS Code

/search – Generate query parameters for workspace search

/runCommand – Search for and execute a command in VS Code

/startDebugging – Generate launch config and start debugging in VS Code (Experimental)

@terminal – Ask how to do something in the terminal

/explain – Explain something in the terminal

@github – Get answers grounded in web search, code search, and your enterprise’s knowledge bases

You can also help me understand your question by using the following variables to give me extra context:

#selection – The current selection in the active editor

#codebase – Searches through the codebase and pulls out relevant information for the query.

#editor – The visible source code in the active editor

#terminalLastCommand – The active terminal’s last run command

#terminalSelection – The active terminal’s selection

#file – Choose a file in the workspace

To have a great conversation, ask me questions as if I was a real programmer:

Show me the code you want to talk about by having the files open and selecting the most important lines.

Make refinements by asking me follow-up questions, adding clarifications, providing errors, etc.

Review my suggested code and tell me about issues or improvements, so I can iterate on it.

You can also ask me questions about your editor selection by starting an inline chat session (⌘K I).

Learn more about GitHub Copilot in Visual Studio Code.

GitHub Copilot Edits (preview)

I can currently see GitHub Copilot Edits in my Visual Studio Code installations (both standard and Insiders), but not in the documentation. It was demonstrated at the GitHub Universe 2024 keynote.

Basically, GitHub Copilot Edits asks you to define a set of files with which you wish to work, and then define the changes you want to make. Copilot Edits runs in the same right-hand sidebar as Copilot Chat. The major difference between the two is that Copilot Edits makes multi-file changes, but Copilot Chat doesn’t, even though Copilot Chat can use multiple files for context.

GitHub Copilot Code Reviews (preview)

GitHub Copilot Code Reviews can review your code in two ways, and provide feedback. One way is to review your highlighted code selection (Visual Studio Code only, open public preview, any programming language), and the other is to more deeply review all your changes (VS Code and GitHub website, public preview with waitlist). Deep reviews can use custom coding guidelines. They are also currently restricted to C#, Go, Java, JavaScript, Markdown, Python, Ruby, and TypeScript.

GitHub Copilot in the CLI

You can use GitHub Copilot with the GitHub CLI to help you with shell commands, as long as the gh command is installed and up to date. Asking the command for help returns:

% gh copilot --help
Your AI command line copilot.

Usage:
  copilot [command]

Examples:

$ gh copilot suggest 'Install git'
$ gh copilot explain 'traceroute github.com'

Available Commands:
  alias       Generate shell-specific aliases for convenience
  config      Configure options
  explain     Explain a command
  suggest     Suggest a command

Flags:
  -h, --help              help for copilot
      --hostname string   The GitHub host to use for authentication
  -v, --version           version for copilot

Use 'copilot [command] --help' for more information about a command.

GitHub Copilot programming language support

GitHub Copilot provides suggestions for many programming languages and frameworks, but the best support is for Python, JavaScript, TypeScript, Ruby, Go, C#, and C++, since those languages were the most prevalent in the training corpus. GitHub Copilot can also assist in query generation for databases, and in generating suggestions for APIs, frameworks, and infrastructure as code.
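Database query generation follows the same comment-driven pattern as code generation. As a sketch (with a made-up schema, and hand-written rather than actual Copilot output), a descriptive comment above an empty query string is often enough for Copilot to complete the SQL:

```python
import sqlite3

# Hypothetical in-memory schema for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('alice', 40.0), ('bob', 25.0), ('alice', 10.0);
""")

# Total order value per customer, highest first -- the kind of query
# Copilot will often complete from a descriptive comment like this one.
query = """
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
    ORDER BY total DESC
"""
rows = conn.execute(query).fetchall()
```

As with generated code, a generated query deserves review: Copilot has no knowledge of your actual schema beyond what is visible in your open files.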

GitHub Copilot Extensions (public preview)

There are currently 27 GitHub Copilot Extensions that you can add to your account and call from GitHub Copilot Chat by using their @-prefixed name. Examples include @models and @perplexityai. While I have been able to install and authenticate these two and some others, I haven’t found them terribly useful so far.

You can write new extensions if you wish. GitHub Copilot Extensions are essentially GitHub Apps with additional read access to GitHub Copilot Chat, integration with the GitHub Copilot API, and optional integration with other LLMs. An extension can be published only if it is owned by an organization account with Verified Creator status. To publish a paid plan for your app on the GitHub Marketplace, your app must be owned by an organization that is a verified publisher.

GitHub Copilot Workspace (private technical preview)

GitHub Copilot Workspace is an “AI-native” development environment that allows you to collaborate with GitHub Copilot on repo-wide coding tasks, using natural language and integrated cloud compute. Copilot Workspace is “task-centric,” meaning that you can start with a GitHub issue, an ad hoc task from the Copilot Workspace dashboard, or an ad hoc task from a repository page. In the first case, the GitHub issue is already defined, so you just use the “Open in Workspace” button to get Copilot Workspace to figure out how to solve it. In the other two cases, you’ll have to define a draft issue and then pass it to Copilot Workspace to solve.

How is GitHub Copilot trained?

GitHub Copilot originally used the OpenAI Codex model, which was essentially GPT-3 additionally trained on lots of open-source code, especially Python code, in GitHub repositories. Later iterations used GPT-4, then GPT-4o, and now a selection of models trained in different ways.

Concerns about GitHub Copilot

The earliest public concerns about GitHub Copilot are summarized in a 2022 class-action lawsuit alleging that GitHub Copilot represents a breach of contract with GitHub’s users and a breach of privacy that shares personally identifiable information. The suit was dismissed by a US District Court judge in San Francisco in July 2024, though the judge declined to dismiss the plaintiffs’ breach-of-contract claim over open-source license violations against all defendants.

Apple released a study in October 2024 concluding that LLMs can’t really perform genuine logical reasoning. If programming requires genuine logical reasoning, that would imply that LLMs can’t truly code, which fits the description of LLMs as “stochastic parrots.” There have also been concerns that the use of OpenAI Codex and similar models may lead students to over-reliance and plagiarism. Others summarize the issue by saying that using models to program makes programmers stupid.

GitHub Copilot competitors

Currently, there are at least a dozen competitors to GitHub Copilot. They include Tabnine, Codeium, CodeGeeX, Replit Ghostwriter, Devin AI, JetBrains AI, Sourcegraph Cody, and Amazon Q Developer, if you want to limit yourself to models embedded in code editors. If you broaden the definition of competition to include chat models that can generate code, then you have to consider multiple models from OpenAI, Anthropic, Google, Mistral, Meta, and several other companies. You can also consider Visual Studio Code alternatives, such as Zed and Cursor (see “Two good Visual Studio Code alternatives”), as well as “next-generation” AI coding products, such as Solver and Zencoder.

Prior to the GitHub Universe 2024 conference, I wondered whether GitHub Copilot was being eclipsed by more capable coding plug-ins, such as Tabnine and Amazon Q Developer, or by the likes of Zed, Cursor, Solver, Zencoder, or other up-and-comers. Now I wonder whether any of those other products will be able to leapfrog VS Code and GitHub Copilot. I don’t count the competitors out, though. Stay tuned.
https://www.infoworld.com/article/3609013/github-copilot-everything-you-need-to-know.html
