This ongoing Docker Labs GenAI series will explore the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real-time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing things as open source so you can play, explore, and hack with us, too.
Can an AI assistant help you write better JavaScript or TypeScript?
Background and introduction
Projects that heavily rely on JavaScript (JS) or TypeScript (TS) are synonymous with the web, so there is a high demand for tools that can improve the consistency and quality of projects using these languages. In previous Docker Labs GenAI posts, we’ve introduced the idea that tools enable AI assistants both to understand our code better and to take action based on that understanding.
In this article, we’ll try to enable our AI assistant to provide helpful, actionable advice for linting JS/TS projects, and we’ll finally delve into the NPM ecosystem.
Another simple prompt
As we learned in this previous Docker Labs GenAI article, you won’t get much help asking an LLM to tell you how to lint your project without any details. So, like before, we’re using our “linguist” tool to learn about the languages used in the project and augment the prompt (Figure 1):
How do I lint my project?
{{# linguist }}
This project contains code from the language {{ language }} so if you have any
recommendations pertaining to {{ language }}, please include them.
{{/linguist}}
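For a project where linguist detects TypeScript, for example, the rendered prompt the model actually receives would read roughly like this (an illustration of the template above, not captured output):

How do I lint my project?
This project contains code from the language TypeScript so if you have any recommendations pertaining to TypeScript, please include them.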
What LLMs provide out of the box
Figure 1: AI assistant responds with information about ESLint.
In Figure 2, we see that GPT-4 recognizes that ESLint is highly configurable and does not work without a config, so it tries to provide one for us, either by helping us run ESLint’s init tool or by writing a config to use.
Figure 2: AI assistant provides information for setting up and running ESLint.
However, this response gives us either a config that does not work for many projects, or a boilerplate setup task for the user to do manually. This is in contrast with other linters, like Pylint or golangci-lint, where linguist was actually enough for the LLM to find a clear path to linting. So, with ESLint, we need to add more knowledge to help the LLM figure this out.
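For reference, the boilerplate the assistant ends up suggesting is a minimal config along these lines (an illustrative sketch using ESLint 9’s flat-config format, not the assistant’s exact output):

```js
// eslint.config.js -- a minimal flat config (ESLint v9).
// Without a file like this, ESLint has nothing to lint against,
// and a generic config like this one rarely fits a real project.
export default [
  {
    files: ["**/*.js", "**/*.ts"],
    rules: {
      "no-unused-vars": "error",
      "no-undef": "error"
    }
  }
];
```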
Configuring ESLint
Using StandardJS
StandardJS is a community-led effort to simplify ESLint configurations. Let’s nudge the assistant toward using it as a starting point. The StandardJS ESLint config is published as its own package, so we can add the following prompt:
If there are no ESLint configuration files found, use StandardJS to lint the project with a consistent config.
We will also add a function definition so that our assistant knows how to run StandardJS. Note the container image defined at the bottom of the following definition:
```yaml
- name: run-standardjs
  description: Lints the current project with StandardJS
  parameters:
    type: object
    properties:
      typescript:
        type: boolean
        description: Whether to lint Typescript files
      fix:
        type: boolean
        description: Whether to fix the files
      files:
        type: array
        items:
          type: string
        description: The filepaths to pass to the linter. Defaults to '.'
    required:
      - typescript
      - fix
  container:
    image: vonwig/standardjs:latest
```
This definition works for both TypeScript and JavaScript projects through the typescript argument. The assistant uses the project content to determine how to set the typescript property.
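When the assistant decides to call this function, the arguments it produces simply follow the schema above. For example (illustrative values):

```json
{
  "typescript": true,
  "fix": false,
  "files": ["."]
}
```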
When using StandardJS with TypeScript, two things happen in the container:
Lints with ts-standard instead of standard.
Runs ts-standard from the working directory containing tsconfig.json.
But, with the right tools, this behavior is enabled with a single prompt:
When using StandardJS, use Typescript only if there are tsconfigs in the project.
Docker environments
Both ESLint and StandardJS run in Node.js environments. In our current prototype, our assistant uses three different Docker images.
ESLint (includes versions 7-9 of ESLint)
StandardJS (includes standard, ts-standard)
Extractor-ESLint (for extracting the knowledge)
Docker is significant because of the previously mentioned requirement of using ts-standard in a directory with tsconfig.json. When we baked this logic into the Docker image, we effectively introduced a contract bridging the AI Assistant, the linter tool, and the overall structure of the repository.
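A rough sketch of that baked-in logic, written as a small Node entrypoint, might look like the following (an assumption for illustration; the actual image’s entrypoint may be structured differently):

```ts
// Hypothetical entrypoint for the StandardJS image: dispatch to
// ts-standard or standard based on the JSON arguments and the
// presence of a tsconfig.json in the working directory.
import { execFileSync } from "node:child_process";
import { existsSync } from "node:fs";

interface Args {
  typescript: boolean;
  fix: boolean;
  files?: string[];
}

function lint({ typescript, fix, files = ["."] }: Args): void {
  const flags = fix ? ["--fix"] : [];
  // ts-standard must run from the directory that holds tsconfig.json.
  const useTs = typescript && existsSync("tsconfig.json");
  const linter = useTs ? "ts-standard" : "standard";
  execFileSync(linter, [...flags, ...files], { stdio: "inherit" });
}

lint(JSON.parse(process.argv[2] ?? "{}") as Args);
```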
After determining that a project uses JavaScript or TypeScript, our assistant also adds Git Hooks. (See this GenAI article for details.) Docker gives us a way to reliably distribute these tools.
If we detect `TypeScript` in the project, we can add the following hook to the hooks entry in the `local` repo entry.
```yaml
- id: standardjs-docker-ts
  name: standardjs linter (TS and JS)
  entry: vonwig/standardjs:latest '{"typescript": true, "fix": false}'
  language: docker_image
  files: '\.(ts|tsx)$'
```
Fixing violations
Linting output comes in the form of violations. A violation is attached to a range in the code file with the offending code and the violation reason. As mentioned previously, 75% of StandardJS violations are automatically fixable. Can we use the AI assistant to automatically fix the remaining violations?
Respond with an edit to resolve the violation using the following JSON format:
```json
{
  "start": [1, 4],
  "end": [1, 6],
  "edit": "lorem ipsum"
}
```
If you take, for example, the lint rule that forbids type-coercing comparisons, all of the models we tested will replace == with ===. Here’s the response when we ask for fixes to lines with the no-this-before-super violation:
Here's the proposed edit to resolve the violation:
```json
{
  "start": [1, 4],
  "end": [2, 6],
  "edit": "class myclass extends base {\n\tval: number;\n\tconstructor() {\n\t\tsuper();\n\t\tthis.val = 10;\n\t}\n}"
}
```
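For context, the offending input is not shown in the response; it presumably looks something like the following sketch, where this is touched before super() is called (class names kept from the edit above; the surrounding base class is assumed):

```ts
class base {}

// Violates no-this-before-super: `this` is accessed before super().
class myclass extends base {
  val: number;
  constructor() {
    this.val = 10; // error: 'this' before 'super()'
    super();
  }
}
```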
If these models are able to fix these violations, why doesn’t ESLint just make them automatically fixable? In many cases, they represent riskier changes that still require some developer supervision. Perhaps the best thing an assistant can do is present these auto-fixes to the user directly in their editors. For example, a fix that has been generated by our assistant can be presented in VSCode (Figure 3).
Figure 3: A fix is presented to the user.
Editor complaints
With the rise of tools like GitHub Copilot, developers are now becoming accustomed to assistants being present in their editors (Figure 4).
Figure 4: AI assistant is present in the editor.
Our work is showing that linting tools can improve the quality of these fixes.
For example, when asking Copilot to fix the line from earlier, it lacks the additional context from ESLint (Figure 5).
Figure 5: Additional context is needed.
The assistant is unable to infer that there is a violation there. In this instance, Copilot is hallucinating because it was triggered by the developer’s editor action without any of the context coming in from the linter. As far as Copilot knows, I just asked it to fix perfectly good code.
To improve this, we can use the output of a linter to “complain” about a violation. The editor allows us to surface a quick action to fix the code. Figure 6 shows the same “fix using Copilot” from the “problems” window, triggered by another violation:
Figure 6: “Fix using Copilot” is shown in the problems window.
This is shown in VSCode’s “problems” window, which helps developers locate problems in the codebase. An assistant can use the editor to put the ESLint tool in a more effective relationship with the developer (Figure 7).
Figure 7: A more complete fix.
Most importantly, we get an immediate resolution rather than a hallucination. We’re also hosting these tools in Docker, so these improvements do not require installs of Node.js, NPM, or ESLint.
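For a concrete sense of how a linter’s output can be pushed into that “problems” window, a VSCode extension can publish the violations as diagnostics, which is what makes quick actions like “Fix using Copilot” available on the offending range (a minimal sketch, not the exact wiring our assistant uses):

```ts
// Minimal sketch: surface linter violations as VSCode diagnostics so they
// appear in the "problems" window and can offer quick fixes.
import * as vscode from "vscode";

interface Violation {
  line: number;      // 1-based line of the violation
  column: number;    // 1-based start column
  endColumn: number; // 1-based end column
  message: string;   // e.g. "'this' is not allowed before 'super()'"
}

const collection = vscode.languages.createDiagnosticCollection("standardjs");

export function report(doc: vscode.TextDocument, violations: Violation[]): void {
  const diagnostics = violations.map((v) =>
    new vscode.Diagnostic(
      new vscode.Range(v.line - 1, v.column - 1, v.line - 1, v.endColumn - 1),
      v.message,
      vscode.DiagnosticSeverity.Warning
    )
  );
  collection.set(doc.uri, diagnostics);
}
```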
Summary
We continue to investigate the use of tools for gathering context and improving suggestions. In this article, we have looked at how AI assistants can provide significant value to developers by:
Cutting out busy work setting up Node/NPM/ESLint
Leveraging expert knowledge about ESLint to “level up” developers
Generating and surfacing actionable fixes directly to developers where they’re already working (in the editor)
Generating simple workflows as outcomes from natural language prompts and tools
As always, feel free to follow along in our new public repo and please reach out. Everything we’ve discussed in this blog post is available for you to try out on your own projects.
For more on what we’re doing at Docker, subscribe to our newsletter.
Learn more
Subscribe to the Docker Newsletter.
Read the Docker Labs GenAI series.
Get the latest release of Docker Desktop.
Vote on what’s next! Check out our public roadmap.
Have questions? The Docker community is here to help.
New to Docker? Get started.