How to Fix ESLint Violations with AI Assistance

This ongoing Docker Labs GenAI series will explore the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing things as open source so you can play, explore, and hack with us, too.

In a previous article in this series, we discussed linters, and we talked about running ESLint with an AI assistant. In this post, we will cover AI-assisted violation resolutions. 

Before an AI assistant can take action on violations via the editor or Git (Figure 1), we first need to answer the question: Can an LLM generate resolutions for ESLint violations with the right knowledge and tools?

Figure 1: Fixing ESLint violations.

Fix any lint violations you can in my project.

ESLint violations

To resolve a violation, the LLM needs information about the violation from the linter, plus some context about the source code causing it. A level of human supervision may also be necessary. We want to keep the context as small as possible, because models both cap their context windows and yield better results with less context. This makes us wonder: “What is the minimum amount of context and supervision necessary to resolve a lint violation?”

For example, without human instruction, our assistant can’t know how to balance our parentheses:

// export const dockerfileRegex = new RegExp('(.*.?Dockerfile', 'i');
// Goal: new RegExp('(.*).?Dockerfile', 'i');

Your `dockerfileRegex` regular expression is currently unterminated. This means the group you started at `(.*.?Dockerfile` doesn't have an ending bracket. Here is a fixed version of it:
```typescript
export const dockerfileRegex = new RegExp('(.*.?Dockerfile)', 'i');
```
I added a ")" to close the group.

Focusing on certain violations

We want to narrow the list of ESLint rules to violations the LLM can help with.

In Figure 2, we have compiled every ESLint rule that:

Is recommended

Is not auto-fixable by ESLint itself

And we want rules that:

Do not require supervision to make reasonable fixes.

Figure 2: Violations that the LLM can help with.

These criteria yield four groups of violations:

Group 1 (No context from the surrounding source code and no supervision needed.) Scripts can be applied to fix these violations.

Group 2 (No context needed from the surrounding source code, but these might be mistakes that require the LLM to evaluate some of the code.) The LLM can generate scripts to resolve these violations, but we would be assuming the violations are mistakes. There are some violations the LLM can evaluate on its own.

Group 3 (Context is needed from surrounding source code, but no supervision is necessary.) This is the best opportunity for our LLM to provide help. We can use tools to pull the right context in.

Group 4 (Context is needed from surrounding source code, and supervision might be needed.) Examples include invalid regex, unsafe optional chaining, and constant conditions; whether the LLM is useful here depends a lot on the exact situation.

Thankfully, nearly all the violations could have reasonable fixes applied without supervision. These are the violations that we will focus on.

Initial prompts

First, we create the prompts to attempt to fix ESLint violations.

You are an AI assistant who specializes in resolving lint violations in projects. Use the tools available to quickly take action and be very brief.
1. Run linter.
2. Evaluate total violations.
// What to do?

Unfortunately, we run into a roadblock when it comes to learning about lint violations from our ESLint tool. When using summary output, we don’t have enough information to know what we’re fixing. However, when using JSON output, we found that as few as 100 violations in a project caused ESLint to send over 10,000 characters over standard out. That is a problem, since many current models limit us to 4-8K tokens. We need a way to persist this large output without consuming tokens.

Artifacts

While we want to use ESLint, it can easily balloon an AI conversation past the model’s context size. To address this, we modified the ESLint tool to write eslint.json to a new ephemeral Docker volume. Reading and writing to this volume serves as another conversation between tools, but sandboxed away from the LLM’s context-sensitive function call output. 
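Here is a minimal sketch of what that could look like, assuming ESLint’s Node.js API; the `/thread/eslint.json` path mirrors the artifact path used in the prompt below, and the function name is our own illustration rather than the exact tool we ship:

```typescript
// Sketch: run ESLint programmatically, persist the full JSON report to the
// ephemeral volume, and return only a short summary to the conversation.
import { ESLint } from "eslint";
import { writeFile } from "node:fs/promises";

export async function runLintToArtifact(
  projectDir: string,
  artifactPath = "/thread/eslint.json" // mount point of the ephemeral volume (assumed)
) {
  const eslint = new ESLint({ cwd: projectDir });
  const results = await eslint.lintFiles(["."]);

  // The raw report can easily blow past the model's context window,
  // so the details live in the artifact, not in the function call output.
  await writeFile(artifactPath, JSON.stringify(results));

  const filesWithViolations = results.filter((r) => r.messages.length > 0).length;
  const totalViolations = results.reduce((n, r) => n + r.messages.length, 0);
  return `Found ${totalViolations} violations in ${filesWithViolations} files.`;
}
```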

We added tools to consume the ESLint artifact in different ways depending on the number of violations summarized.

- name: parse_lint_results
  description: Loads lint violations grouped by type.
  parameters:
    type: object
    properties:
      outputLevel:
        type: string
        description: Supports condensed or complaints
  container:
    image: vonwig/read_eslint
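As an illustration, the condensed output level could be produced by reading the artifact back and grouping file paths by rule. The JSON shape follows ESLint’s standard JSON formatter; the grouping logic here is a sketch, not the exact implementation inside `vonwig/read_eslint`:

```typescript
// Sketch: read /thread/eslint.json and group violating file paths by rule,
// matching the condensed shape shown in the transcript further below.
import { readFileSync } from "node:fs";

type LintMessage = { ruleId: string | null };
type LintResult = { filePath: string; messages: LintMessage[] };

function condensed(artifactPath = "/thread/eslint.json"): Record<string, string[]> {
  const results: LintResult[] = JSON.parse(readFileSync(artifactPath, "utf8"));
  const byRule: Record<string, string[]> = {};
  for (const { filePath, messages } of results) {
    for (const { ruleId } of messages) {
      if (!ruleId) continue;
      if (!byRule[ruleId]) byRule[ruleId] = [];
      if (!byRule[ruleId].includes(filePath)) byRule[ruleId].push(filePath);
    }
  }
  return byRule; // e.g. { "eqeqeq": ["src/tree/LocalRootTreeItemBase.ts", ...] }
}
```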

Now we can use this tool to finish our prompt.

You are an AI assistant who specializes in resolving lint violations in projects. Use the tools available to quickly take action and be very brief.
1. Run lint. // Will write /thread/eslint.json
2. Evaluate total violations. // Will read /thread/eslint.json
<10 violations: Parse output with complaints output.
10+ violations: Parse output with condensed output.
3. Fix the violations using the following steps:
## Condensed:
{>fixing}

A user prompt of:

1. Get the line: Use read_eslint with the `path` arg to get all of the violations for a file.
2. Make the correction: Respond in the following format:

```json
{
"start": [1, 4],
"end": [2, 4],
"edit": "Lorem ipsum"
}
```

Once you have fixed one file, move on to the next.
You are able to fix the following violations:
no-async-promise-executor

yields:

The violation is located in the file src/utils/promiseUtils.ts at line 30-31. The current code is:
```javascript
return new Promise(async (resolve, reject) => {

});
```

To resolve the no-async-promise-executor violation, remove the async keyword from the `Promise` executor function. The corrected code would look like:

```javascript
return new Promise((resolve, reject) => {

});
```

Unfortunately, we’re still not seeing a good result. Without the violating context from the code, our assistant again can’t provide an actionable fix, so we need a tool to fetch that context.

Tree Sitter

Without the right tools, the LLM is at best vaguely helpful and at worst quite liable to hallucinate. We need tools that make the LLM actionable here. Tree-sitter is a popular open source tool we can use to parse a variety of languages into abstract syntax trees (ASTs).

The powerful part of Tree-sitter that we want to take advantage of is the community of grammar authors providing support for almost any language our assistant could encounter. We have Dockerized this tool to get context given a file path and line number.
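A rough sketch of that lookup, using the node-tree-sitter bindings (package names `tree-sitter` and `tree-sitter-typescript` assumed, with esModuleInterop enabled; the exact output shape of our Dockerized tool differs slightly):

```typescript
// Sketch: given a file path and a violating line, return the offending line,
// the smallest syntax node on it, and the parent node's text as context.
import Parser from "tree-sitter";
import TypeScriptGrammars from "tree-sitter-typescript"; // exports { typescript, tsx }
import { readFileSync } from "node:fs";

export function contextForLine(path: string, line: number) {
  const parser = new Parser();
  parser.setLanguage(TypeScriptGrammars.typescript);
  const source = readFileSync(path, "utf8");
  const tree = parser.parse(source);

  // The transcript above passes the ESLint line straight through as a row;
  // adjust for 0- vs 1-indexing if your linter output differs.
  const row = line;
  const node = tree.rootNode.descendantForPosition({ row, column: 0 });

  return {
    offending_line: source.split("\n")[row],
    line_node: {
      type: node.type,
      startPosition: node.startPosition,
      endPosition: node.endPosition,
    },
    // The parent node's text is usually enough context for the LLM to act on.
    parent_text: node.parent?.text,
  };
}
```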

Results

We have trimmed debug finish responses from the payload but kept function calls.

## ROLE assistant
…({:function {:name run_lint, :arguments {}}, :id call_MaK7CGpOISktz1fZP3GS7m4H})

## ROLE assistant
…………………….({:function {:name run-eslint, :arguments {
"args": ".",
"version": 8,
"outputLevel": "summary"
}}, :id call_RHX2hztWvghxbyZx5cvSXgMU})

## ROLE tool (run-eslint)
Running npx with args: eslint@8 --format json .
Preparing output
Found 41 violations in 271 files.

## ROLE tool (run_lint)
I ran ESLint (version 8) on your entire project. Here are the results:

– 271 files were checked for violations
– Out of these, 41 files had one or more violations

## ROLE assistant
………….({:function {:name parse_lint_results, :arguments {
"outputLevel": "condensed"
}}, :id call_sYHP8tXuKbXGaXdNCdO3W5kU})

## ROLE tool (parse_lint_results)
{ "@typescript-eslint/no-floating-promises": [ "src/commands/help.ts",
"src/commands/registerWorkspaceCommand.ts" ], "@typescript-eslint/no-explicit-any": [ "src/commands/registerLocalCommand.ts" ], "@typescript-eslint/naming-convention": [ "src/constants.ts" ], "eqeqeq": [ "src/debugging/DockerDebugConfigurationProvider.ts", "src/debugging/DockerServerReadyAction.ts", "src/runtimes/files/ContainerFilesProvider.ts", "src/tree/LocalRootTreeItemBase.ts" ], "@typescript-eslint/no-unused-vars": [ "src/extension.ts", "src/scaffolding/wizard/netCore/NetCoreGatherInformationStep.ts" ], "@typescript-eslint/semi": [ "src/utils/registryExperience.ts" ] }

## ROLE assistant
……………({:function {:name violations_for_file, :arguments {
"path": "src/commands/help.ts"
}}, :id call_fAN5kTrXndaNs7Ix4iH51E4T})

## ROLE tool (violations_for_file)
Getting eslint.json for path: src/commands/help.ts
ESLint violations for src/commands/help.ts:
{ "filePath": "src/commands/help.ts", "messages": [ { "ruleId": "@typescript-eslint/no-floating-promises", "severity": 1, "message": "Promises must be awaited, end with a call to .catch, end with a call to .then with a rejection handler or be explicitly marked as ignored with the void operator.", "line": 35, "column": 5, "nodeType": "ExpressionStatement", "messageId": "floatingVoid", "endLine": 35, "endColumn": 100] }

## ROLE assistant
………………….({:function {:name run_tree_sitter, :arguments {
"path": "src/commands/help.ts",
"line": 35
}}, :id call_cKnNRQs5hmMsTszTfDnwfK12})

## ROLE tool (run_tree_sitter)
{
offending_line: '}',
line_node: SyntaxNode {
type: },
startPosition: {row: 35, column: 0},
endPosition: {row: 35, column: 1},
childCount: 0,
},
parent: SetNode {
type: set,
startPosition: {row: 33, column: 45},
endPosition: {row: 35, column: 1},
childCount: 4,
},
parent_text: '{\n' +
" vscode.env.openExternal(vscode.Uri.parse('https://github.com/microsoft/vscode-docker/issues'));\n" +
'}'
}

## ROLE assistant
The violation on line 35 of the `src/commands/help.ts` file is due to a floating promise. To resolve this, add the void operator to the beginning of the line as below:
```js
void vscode.env.openExternal(vscode.Uri.parse('https://github.com/microsoft/vscode-docker/issues'));
```

This result demonstrates that the LLM can orchestrate a small code improvement using the right tools.

Failures

Unfortunately, it terminates after resolving the first violation rather than continuing on to the next. This proved difficult to change; even two other combinations of prompts failed to get a loop going.

Fix any lint violations you can using the tools provided.
After resolving one violation, move onto the next.
Once you have fixed one file, move on to the next.

Successful prompt

However, what we did find was that specifying the lint violation to focus on did work:

Please only fix no-unused-vars

yields:

## ROLE assistant
The violation in the file "src/extension.ts" is a declared variable 'unuseVar' at line 41, which is not used anywhere. The fix would be the removal of that line.

Figure 3 lists a few interesting violations we tested.

Figure 3: Violations tested.

Overall, we can see that our assistant, when given the right tools to find and fix violations, can make good code change suggestions for even difficult lint violations. We welcome you to check out the code and test with different violations that we didn’t cover.

Summary

We continue to investigate the use of tools for gathering context and improving suggestions. In this installment, we have looked at how AI assistants can provide significant value to developers by:

Using a Dockerized linter to learn about violations in a TypeScript project without installing Node, npm, or ESLint locally.

Leveraging an ephemeral volume to store large context without using tokens.

Using Tree-sitter to load precise, relevant code context from a project.

Generating fixes for violations in a TS project.

As always, feel free to follow along in our new public repo and please reach out. Everything we’ve discussed in this blog post is available for you to try out on your own projects.

Learn more

Subscribe to the Docker Newsletter.

Read the Docker Labs GenAI series.

Get the latest release of Docker Desktop.

Have questions? The Docker community is here to help.

New to Docker? Get started.


Optimizing AI Application Development with Docker Desktop and NVIDIA AI Workbench

Are you looking to streamline how to incorporate LLMs into your applications? Would you prefer to do this using the products and services you’re already familiar with? This is where Docker Desktop, especially when paired with the advanced capabilities offered by Docker’s Business subscription tier, comes into play — particularly when combined with NVIDIA’s cutting-edge technology.

Imagine a development environment where setting up and managing AI workloads is as intuitive as the everyday tools you’re already using. With our deepening partnership with NVIDIA, we are committed to making this a reality. This collaboration not only enhances your ability to leverage Docker containers but also significantly improves your overall experience of building and developing AI applications.

What’s more, this partnership is designed to support your long-term growth and innovation goals. Docker Desktop with Docker Business, combined with NVIDIA software, provides the perfect launchpad for developers who want to accelerate their AI development journey — whether it’s building prototypes or deploying enterprise-grade AI applications. This isn’t just about providing tools; it’s about investing in your abilities, your career, and the innovation capabilities of your organization.

With Docker Business, you gain access to advanced capabilities that enhance security, streamline management, and offer unparalleled support. Meanwhile, NVIDIA AI Workbench provides a robust, containerized environment tailored for AI and machine learning projects. Together, these solutions empower you to push the boundaries of what’s possible, bringing AI into your applications more effortlessly and effectively.

What is NVIDIA AI Workbench?

NVIDIA AI Workbench is a free developer toolkit powered by containers that enables data scientists and developers to create, collaborate, and migrate AI workloads and development environments across GPU systems. It targets scenarios like model fine-tuning, data science workflows, retrieval-augmented generation, and more. Users can install it on multiple systems but drive everything from a client application that runs locally on Windows, Ubuntu, and macOS. NVIDIA AI Workbench helps enable collaboration and distribution through Git-based platforms, like GitHub and GitLab. 

How does Docker Desktop relate to NVIDIA AI Workbench?

NVIDIA AI Workbench requires a container runtime. Docker’s container runtime (Docker Engine), delivered through Docker Desktop, is the recommended AI Workbench runtime for developers using AI Workbench on Windows and macOS. Previously, AI Workbench users had to install Docker Desktop manually. With this newest release of AI Workbench, developers who select Docker as their container runtime will have Docker Desktop installed on their machine automatically, with no manual steps required.

 You can learn about this integration in NVIDIA’s technical blog.

Moving beyond the AI application prototype

Docker Desktop is more than just a tool for application development; it’s a launchpad that provides an integrated, easy-to-use environment for developing a wide range of applications, including AI. What makes Docker Desktop particularly powerful is its ability to seamlessly create and manage containerized environments, ensuring that developers can focus on innovation without worrying about the underlying infrastructure.

For developers who have already invested in Docker, this means that the skills, automation, infrastructure, and tooling they’ve built up over the years for other workloads are directly applicable to AI workloads as well. This cross-compatibility offers a huge return on investment, as it allows teams to extend their existing Docker-based workflows to include AI applications and services without needing to overhaul their processes or learn new tools.

Docker Desktop’s compatibility with Windows, macOS, and Linux makes it an ideal choice for diverse development teams. Its robust features support a wide range of development workflows, from initial prototyping to large-scale deployment, ensuring that as AI applications move from concept to production, developers can leverage their existing Docker infrastructure and expertise to accelerate and scale their work.

For those looking to create high-quality, enterprise-grade AI applications, Docker Desktop with Docker Business offers advanced capabilities. These include enhanced security, management, and support features that are crucial for enterprise and advanced development environments. With Docker Business, development teams can build securely, collaborate efficiently, and maintain compliance, all while continuing to utilize their existing Docker ecosystem. By leveraging Docker Business, developers can confidently accelerate their workflows and deliver innovative AI solutions with the same reliability and efficiency they’ve come to expect from Docker.

Accelerating developer innovation with NVIDIA GPUs

In the rapidly evolving landscape of AI development, the ability to leverage GPU capabilities is crucial for handling the intensive computations required for tasks like model training and inference. Docker is working to offer flexible solutions to cater to different developers, whether you have your own GPUs or need to leverage cloud-based compute. 

Running containers with NVIDIA GPUs through Docker Desktop 

GPUs are at the heart of AI development, and Docker Desktop is optimized to leverage NVIDIA GPUs effectively. With Docker Desktop 4.29 or later, developers can configure CDI support in the daemon and easily make all NVIDIA GPUs available in a running container by using the --device option with CDI device support.

For instance, the following command can be used to make all NVIDIA GPUs available in a container:

docker run --device nvidia.com/gpu=all <image> <command>

For more information on how Docker Desktop supports NVIDIA GPUs, refer to our GPU documentation.

No GPUs? No problem with Testcontainers Cloud

Not all developers have local access to powerful GPU hardware. To bridge this gap, we’re exploring GPU support in Testcontainers Cloud. This will allow developers to access GPU resources in a cloud environment, enabling them to run their tests and validate AI models without needing physical GPUs. With Testcontainers Cloud, you will be able to harness the power of GPUs from anywhere, democratizing high-performance AI development.

Trusted AI/ML content on Docker Hub

Docker Desktop provides a reliable and efficient platform for developers to discover and experiment with new ideas and approaches in AI development. Through its trusted content program, Docker works with open source and commercial communities to select and curate high-quality images, distributing them on Docker Hub under Docker Official Images, Docker Sponsored Open Source, and Docker Verified Publishers. With a wealth of AI/ML content, Docker makes it easy for users to discover and pull images for quick experimentation. This includes a variety of images, such as NVIDIA software offerings and many more, allowing developers to get started quickly and efficiently.

Accelerated builds with Docker Build Cloud

Docker Build Cloud is a fully managed service designed to streamline and accelerate the building, testing, and deployment of any application. By leveraging Docker Build Cloud, AI application developers can shift builds from local machines to remote BuildKit instances — resulting in up to 39x faster builds. By offloading the complex build process to Docker Build Cloud, AI development teams can focus on refining their models and algorithms while Docker handles the rest.

Docker Business users can experience faster, more efficient builds and reproducible AI deployments, with Docker Build Cloud minutes included as part of their subscription.

Ensuring quality with Testcontainers

As AI applications evolve from prototypes to production-ready solutions, ensuring their reliability and performance becomes critical. This is where testing frameworks like Testcontainers come into play. Testcontainers allows developers to test their applications using real containerized dependencies, making it easier to validate application logic that utilizes AI models in self-contained, idempotent, reproducible ways.

For instance, developers working with LLMs can create Testcontainers-based tests that exercise their application against any model available on Hugging Face, using the recently released Ollama container.
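As a hedged sketch of what such a test might look like with the Node.js Testcontainers library (the model name, API route, and Jest setup are illustrative assumptions):

```typescript
// Sketch: start the Ollama container with Testcontainers, pull a small model,
// and run a test against it over the mapped port. Assumes a Jest environment
// and Node 18+ (global fetch).
import { GenericContainer, StartedTestContainer } from "testcontainers";

describe("LLM-backed feature", () => {
  let ollama: StartedTestContainer;

  beforeAll(async () => {
    ollama = await new GenericContainer("ollama/ollama")
      .withExposedPorts(11434)
      .start();
    // Pull a small model inside the container (model choice is an assumption).
    await ollama.exec(["ollama", "pull", "llama3.2:1b"]);
  }, 300_000);

  afterAll(async () => {
    await ollama.stop();
  });

  it("answers against a real model", async () => {
    const baseUrl = `http://${ollama.getHost()}:${ollama.getMappedPort(11434)}`;
    const res = await fetch(`${baseUrl}/api/generate`, {
      method: "POST",
      body: JSON.stringify({ model: "llama3.2:1b", prompt: "Say hello", stream: false }),
    });
    expect(res.ok).toBe(true);
  });
});
```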

Wrap up

The collaboration between Docker and NVIDIA marks a significant step forward in the AI development landscape. By integrating Docker Desktop into NVIDIA AI Workbench, we are making it easier than ever for developers to build, ship, and run AI applications. Docker Desktop provides a robust, streamlined environment that supports a wide range of development workflows, from initial prototyping to large-scale deployment. 

With advanced capabilities from Docker Business, AI developers can focus on innovation and efficiency. As we deepen our partnership with NVIDIA, we look forward to bringing even more enhancements to the AI development community, empowering developers to push the boundaries of what’s possible in AI and machine learning. 

Stay tuned for more exciting updates as we work to revolutionize AI application development.

Learn more

Get started with Docker Desktop on NVIDIA AI Workbench by installing today.

Authenticate and update to receive your subscription level’s newest Docker Desktop features.

New to Docker? Create an account. 

Subscribe to the Docker Newsletter.

Check out NVIDIA’s Docker Hub Library.

Get started with RAG application development with Docker’s GenAI Stack.
