OpenSumi is a leading open-source framework for building IDEs, noted for its high performance and high degree of customization. It provides developers with the tools and components needed to create integrated development environments that run on both the Web and Electron. Unlike off-the-shelf IDEs such as VS Code and IntelliJ IDEA, OpenSumi positions itself as an extensible IDE framework focused on lowering the barrier to customization, enabling developers to easily integrate tailored features to meet specific business needs.
The Development of OpenSumi:
- In 2019, it was co-founded by Alibaba Group and Ant Group and began closed development.
- In 2020, version 1.0 was released, introducing a plugin mechanism and support for VS Code plugins.
- In 2021, version 2.0 was released, providing a React component-based sidebar plugin mechanism, making it more convenient to customize the UI.
- In March 2022, the project was officially opened to the open source community, and its technology was successfully applied to a variety of products including Alipay’s Mini Program IDE, Alipay Mini Program Cloud Codespaces, Lark Developer Tools, and DingTalk Developer Tools.
- In September 2023, the purely frontend solution CodeBlitz was launched. It is an IDE framework that operates entirely in the browser without backend support and supports key operations such as reading, writing, running, and committing code. It has been integrated into several platforms including Gitlink, Codeup, AtomGit, and Gitee.
Today, OpenSumi releases version 3.0, aiming to provide an AI-native development experience for developers entering the era of large models.
AI Native IDE Framework

The year 2023 marked a watershed in the AIGC field. Large language models achieved significant breakthroughs, giving rise to many AI-driven applications, especially in R&D. AI development assistants such as GitHub Copilot lead the way with automatic completion, code refactoring, code explanation, comment generation, and conversational Q&A, greatly enhancing developers' coding efficiency and accuracy. Many large companies have launched AI coding assistant plugins similar to GitHub Copilot, such as Amazon's CodeWhisperer, Sourcegraph's Cody, Zhipu's CodeGeeX, Baidu's Comate, Alibaba Cloud's Tongyi Lingma, and Ant Group's CodeFuse. AI-assisted coding has undoubtedly become one of the first large-model applications to achieve practicality and commercial value. The success of GitHub Copilot hinges on both the powerful GPT-4 model behind it and the innovative interaction methods designed for VS Code.
Indeed, without interactive design tailored specifically for coding scenarios, it’s hard to achieve an excellent user experience. For example, with the Inline Chat interaction, GitHub Copilot can pop up an input box within the editor, allowing users to directly converse with the model in natural language while the generated code is visually presented in the editor as a diff. Developers can easily accept or ignore the changes, making the process smooth and coherent.
However, Inline Chat is currently available only to the official GitHub Copilot plugin. Other Copilot-like plugins can only send instructions to their conversation panels, from which developers must copy or insert the code into the editor. This approach is limited by the current openness of the plugin API, resulting in an inconsistent user experience for non-official third-party plugins, and it also reveals the market barriers GitHub Copilot enjoys.
While VS Code provides basic API surfaces such as code completion and conversation views, the capabilities open to AI extensions are very limited and do not extend well into areas such as the run and debug workflow, the problems panel, the terminal, Git, and so on. Even the latest Chat Participant API is limited to extending conversations with GitHub Copilot. Such limitations have forced products like Anysphere's Cursor and ByteDance's MarsCode to fork VS Code for customized development, which inevitably increases the cost of subsequent upgrades and conflict resolution.
The market now urgently needs a highly customizable and extensible AI-native IDE framework. This framework needs to be able to encapsulate AI for areas such as code completion, problem diagnostics, terminal operations, debugging, conversation, and IDE settings, and provide plug-and-play integration. This will allow enterprises to easily build IDE products suited for the era of large models and integrate deeply with internal model services and the DevOps toolchain.
Core Idea: OpenSumi began its transformation toward AI in July 2023, aiming to upgrade the traditional IDE framework into an AI Native IDE framework. Its core philosophy can be summarized as:
- A shift in behavior patterns: From issuing commands via command line or graphical interfaces to describing intentions, thus changing the operational approach by combining intent description with multi-turn communication.
- Focusing on “user needs” at its core, integrating tasks relevant to different scenarios. In the old model, users may need to cross multiple platforms to complete development. But in the new model, users just need to articulate clear intentions, and the AI engine will take care of the scheduling and execution.
This requires AI to accurately understand users’ demands, and to analyze tasks and intelligently schedule task execution based on those needs.
As technology continues to advance, the mode of interaction is undergoing revolutionary change. In this new pattern, the area where the editor interacts with AI becomes the most critical link. In the IDE, less frequently used features such as menus and settings are de-emphasized, and read-oriented components like terminals and file trees are also toned down, while interaction with AI is strengthened to lower the cost of switching between operations.
OpenSumi adheres to the philosophy of the GitHub Copilot team: “If users have to change their coding style when using GitHub Copilot, that is a bug.” Therefore, OpenSumi has not altered the existing development patterns of developers, but cleverly incorporated AI capabilities on top of these habits, making the use of AI functions more convenient.
In the R&D and application of AI capabilities, OpenSumi 3.0 has been deployed in multiple application scenarios at Ant Group and Alibaba Group, accumulating rich experience in large-model application development and prefiguring key changes in future development workflows.
Data Preparation
Preparing fine-tuning data is the first step in developing AI features. It is very important to make effective use of the operational data collected in the IDE to generate quality training data, and to prepare evaluation data for assessing the effect of AI functions. In the age of AI, application development data is especially crucial.
High-Frequency Scenario Approach
Practice has shown that the less an AI feature disturbs users' habits, the more frequently it is used. For example, code completion is the AI feature with the highest usage rate, because users only need to press the Tab key to use it. Therefore, frequently used AI functions have been integrated into the editor rather than hidden in complicated right-click menus.
Evolution of Metrics
During the promotion of AI features, the observed metrics are also continually changing. Initially, we focused on the scale of AI version usage, and then shifted our focus to the usage rate of AI functions. For example, the usage rate of terminal commands generated by natural language was not high, leading us to optimize the interaction by referencing Warp’s approach and deeply integrating the terminal with AI capabilities; this significantly increased the usage rate. Once the usage rate reaches a certain level, we pursue the adoption rate of various AI features and continue to optimize through engineering, prompt design, fine-tuning, model upgrades, etc., to improve the adoption rate.
Feature Overhaul and Integration Example
As an example of feature integration, the editor module is the part developers use most frequently in the IDE. For this module, we conducted a comprehensive AI upgrade, steadily refining its smart editor functions.
Those interested in learning more details can visit OpenSumi's public repository.
OpenSumi is dedicated to providing intelligent code suggestions without changing developers’ habits. The framework offers a suite of distinctive features:
- Line-level and inline completions: By pressing the Tab key, you can quickly adopt the code suggestions presented in the IDE.
- Inline Chat: Developers can comment on, annotate, or perform other quick actions directly in the code editor for selected lines of code, without switching to the conversation panel.
- Code Action: When using refactorings or code suggestions provided by language services, you can find a “light bulb” icon in the upper right-hand corner of the code snippet. OpenSumi integrates AI-related explanations, annotations, and code optimization functions here.
- Rename: When using the rename feature, the system automatically recommends related renaming suggestions.
- Dropdown completions: Rather than sorting alphabetically, OpenSumi sorts completion options by practicality through expanded language service capabilities, helping users find what they need faster.
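OpenSumi does not publish its exact ranking heuristic, but the idea of sorting by practicality rather than alphabetically can be sketched as re-ranking by prefix match and recency. The `CompletionItem` shape and scoring rules below are illustrative assumptions, not OpenSumi's API:

```typescript
interface CompletionItem {
  label: string;
  lastUsed: number; // timestamp of last acceptance, 0 if never accepted
}

// Hypothetical re-ranking: exact-prefix matches first, then recently
// accepted items, with alphabetical order only as a tie-breaker.
function rankCompletions(items: CompletionItem[], prefix: string): CompletionItem[] {
  return [...items].sort((a, b) => {
    const aPrefix = a.label.startsWith(prefix) ? 0 : 1;
    const bPrefix = b.label.startsWith(prefix) ? 0 : 1;
    if (aPrefix !== bPrefix) return aPrefix - bPrefix;
    if (a.lastUsed !== b.lastUsed) return b.lastUsed - a.lastUsed;
    return a.label.localeCompare(b.label);
  });
}
```

A real ranker would also weigh type information from the language service, but the principle is the same: signals of practicality outrank the alphabet.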
OpenSumi 3.0 has built-in code completion, eliminating the need to install additional plugins, and provides an intelligent auto-completion experience at the line and snippet level. The framework's advantage lies in making full use of the extensive contextual data available in the IDE environment, including:
- The content before and after the cursor
- The name of the file being edited and the language used
- The developer’s operation history in the IDE
- The code dependencies of the file
- The contextual environment obtained through syntax analysis
Moreover, the IDE can easily access data that plugins cannot, such as whether code hints are being shown and whether an auto-completion was partially adopted. Through built-in prompt engineering, this information is transformed into high-quality prompts, yielding more precise and relevant completion suggestions.
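The exact prompt template is internal to OpenSumi, but as a hypothetical sketch, the contextual signals listed above could be assembled into a fill-in-the-middle style completion prompt. The field names and sentinel tokens here are illustrative assumptions:

```typescript
interface CompletionContext {
  prefix: string;    // content before the cursor
  suffix: string;    // content after the cursor
  fileName: string;  // name of the file being edited
  language: string;  // language of the file
  imports: string[]; // code dependencies of the file
}

// Hypothetical prompt assembly: file metadata and dependencies become
// comment-style hints, and the suffix is passed for fill-in-the-middle.
function buildCompletionPrompt(ctx: CompletionContext): string {
  const header = [
    `// file: ${ctx.fileName} (${ctx.language})`,
    ...ctx.imports.map((dep) => `// depends on: ${dep}`),
  ].join('\n');
  return `${header}\n<PREFIX>${ctx.prefix}<SUFFIX>${ctx.suffix}<MIDDLE>`;
}
```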
With the help of configuration capabilities and automated evaluation (an additional code-completion test plugin built on the framework), OpenSumi has found that in code completion, "more" does not mean "better". Take the following code snippet as an example:
class AIService implements IAIService {
  // Trigger completion
}
Simple snippet matching may not produce the ideal completion suggestion. Retrieving relevant interface definitions (such as IAIService) from the code context and feeding their contents to the model as prompts yields more accurate results. In practice, OpenSumi has also distilled some lessons that drew positive user feedback:
- If waiting for a completion takes longer than 1.5 seconds, users tend to ignore it.
- The autocomplete engine should enforce a maximum search-time threshold that accounts for the total number of tokens in the completion, server response speed, and request latency, keeping the time spent under 300 milliseconds.
When dealing with very large files, to avoid placing an excessive burden on computational resources, we recommend truncating the file appropriately before running the autocomplete engine. Conventional truncation methods include character-, line-, or syntax-based truncation; syntax-based truncation generally performs poorly in completion scenarios because parsing with tree-sitter is slow.
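Of the truncation strategies mentioned, line-based truncation around the cursor is cheap and effective; a minimal sketch (the window sizes here are illustrative defaults, not OpenSumi's):

```typescript
// Keep at most `before` lines above and `after` lines below the cursor,
// so a very large file does not blow up the completion prompt.
function truncateAroundCursor(
  text: string,
  cursorLine: number,
  before = 100,
  after = 50,
): string {
  const lines = text.split('\n');
  const start = Math.max(0, cursorLine - before);
  const end = Math.min(lines.length, cursorLine + after + 1);
  return lines.slice(start, end).join('\n');
}
```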
In OpenSumi 3.0, we have integrated modular default configurations to optimize the autocomplete engine. OpenSumi 3.0 also ships the Inline Chat feature, a capability previously exclusive to GitHub Copilot and not yet exposed by VS Code's plugin API. This feature allows developers to interact with the AI in natural language for code explanation, comment addition, and test generation while editing code.
The Inline Chat feature pops up an input box directly in the editor interface, allows developers to accept or reject code changes, and is combined with a diff view so that operations happen right next to the code. This design avoids the inconvenience of switching between the code editor and a conversation interface, helping developers maintain focus and flow and significantly improving work efficiency.
Simply by selecting the code, the Inline Chat component pops up, allowing users to effortlessly accept, discard, or regenerate suggestions within the editor. Since its release, this feature has undergone three major refactors and iterative updates, aiming to find the perfect balance between user experience and functionality. In the initial design phase, to meet user needs as much as possible, we attempted to integrate a range of functions into the overlay panel, but ultimately found that this approach disrupted the user experience for the following reasons:
- The popup method of the overlay panel when selecting code was intrusive.
- The overlay panel was too large, affecting the reading and understanding of the code.
- The input box was also too large, seriously blocking the user’s view and reducing the code’s readability.
In response to these challenges, we revisited the core value of the feature and deeply considered how to provide help without affecting the user’s workflow and how users could conveniently view the returned code differences. In the second iteration, we implemented the following solutions:
- Optimized and streamlined the floating panel, leaving only the core functions, reducing the interference with users’ work.
- Provided more flexible triggering methods such as selection or using shortcut keys, which enhanced the freedom of operation.
The latest iteration introduced the inline diff editor: through an intuitive interface, users can view the code modification suggestions offered by Inline Chat, compare them with the original code, and choose for themselves whether to accept them. The third iteration further optimized the feature: the floating panel is now displayed more intelligently, no longer confined to a fixed area but seeking appropriate space near the selected code, greatly reducing interference with code reading.
After numerous iterations, our introduced inline chat function now supports integration through module API, providing users a better programming experience. Moreover, we have opened the experience to the general users and developers and look forward to their valuable feedback.
In terms of Code Action enhancements, OpenSumi 3.0 adds a number of features, including automatically recognizing functions and code blocks in the editor and quickly surfacing the AI shortcut-action entry point. These cover adding comments, explaining code, generating unit tests, and more, and are built on web-tree-sitter and WebAssembly. We provide tree-sitter grammars for different languages, pre-built as WASM modules and published to NPM and a CDN for easy loading in browsers.
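The real implementation walks the tree-sitter syntax tree loaded from a pre-built WASM grammar; as a greatly simplified stand-in, the idea of locating function blocks to attach AI actions to can be sketched with a line scan:

```typescript
interface FunctionRange {
  name: string;
  startLine: number;
}

// Simplified stand-in for syntax-based detection: find `function`
// declarations by pattern matching. tree-sitter instead yields exact node
// ranges and works uniformly across languages.
function findFunctionRanges(source: string): FunctionRange[] {
  const ranges: FunctionRange[] = [];
  source.split('\n').forEach((line, i) => {
    const m = line.match(/function\s+([A-Za-z_$][\w$]*)/);
    if (m) ranges.push({ name: m[1], startLine: i });
  });
  return ranges;
}
```

Each detected range is where the editor would anchor actions like "explain code" or "generate unit test".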
Utilizing AI technology, OpenSumi 3.0 provides intelligent name suggestions when users attempt symbol renaming. This dialog panel-style feature allows users to interact with the AI model, enhancing the intuitiveness and convenience of programming. The AI renaming function uses the context of the code to generate name suggestions that are consistent with the code style.
To enhance the AI model’s understanding of the OpenSumi environment, Ant Group provided it with specialized training, including datasets of commands and settings customized for OpenSumi. Such training ensures that AI can more accurately parse user intent and invoke the corresponding commands and settings within the IDE.
In today's rapidly iterating software development environment, OpenSumi 3.0 champions an intelligent transformation, adopting IDE Agent technology to mark the shift from traditional plugin architectures to the era of intelligent agents. With the IDE Agent, developers can trigger functions of the integrated development environment directly through simple natural-language dialogue, without learning complex operations. This significantly enhances development efficiency while creating a more intuitive and convenient interactive development space.
Further, companies can develop a range of precise tool Agents that seamlessly connect to platform services, making programming and command invocation natural and smooth. As the technology advances, intelligent agents will eventually be able to orchestrate and coordinate the tasks of different tool Agents on their own, significantly simplifying the software development process.
The OpenSumi IDE Agent is compatible with the VS Code Chat Participant API and also provides additional React components to render conversation cards to enhance user interaction. For example, developers can use a natural dialogue for code commits and pull requests (PRs). With the aid of models, the Git Agent can even automatically generate commit messages as well as titles and descriptions for PRs, greatly simplifying the version control workflow and enhancing work efficiency.
The problem diagnosis process has also been substantially optimized. Previously, developers had to manually copy error information into a search engine or a help panel from the IDE environment and then switch back to the IDE to fix the issue, a cumbersome and inefficient process. Now, OpenSumi 3.0 has introduced a sophisticated error capture mechanism. When exception messages occur in the terminal or debug console, OpenSumi automatically captures them and provides a one-click troubleshooting function. With just one click, developers can quickly lock in on the problem and get a solution, or even the actual fix code.
The integrated error diagnosis and repair solution saves developers time, improves efficiency, and reduces cognitive load during error analysis and resolution. It lets developers focus on innovation and coding rather than on environment and debugging issues. Faced with the diversity of exception messages, the capture-and-resolution strategy for front-end tsc compilation errors demonstrates OpenSumi's ability to handle diverse, non-uniform exception logs.
We have recently developed a powerful set of API interfaces, specifically designed to meet the diverse business needs for exception information capture. This highly customizable interface allows users to define exception information capture rules according to their own business characteristics. Users can then precisely capture, analyze, and deal with exception messages, greatly improving the efficiency and accuracy of problem handling.
For user convenience, we have integrated a whole set of preset rules applicable to various programming languages in the interface. These rules, established under various common scenarios, can be put into immediate use, and can also be modified and expanded according to users’ own needs.
Below is a code example of using our API to capture exception information:
registry.registerTerminalInlineChat(
  {
    id: 'terminal-catch',
    name: 'catch',
  },
  {
    // Define trigger rules, including preset and custom ones
    triggerRules: [
      NodeMatcher,
      TSCMatcher,
      NPMMatcher,
      ShellMatcher,
      JavaMatcher,
      // Example of a custom matcher
      class extends BaseTerminalDetectionLineMatcher {
        doMatch(output) {
          // Check whether the output contains the keyword 'error'
          return output.some((t) => t.content.includes('error'));
        }
      },
    ],
    // Define the execution logic when a match succeeds
    execute: async (stdout, stdin, rule) => {
      // Customize implementation details based on the business
    },
  },
);
Through this code, users can quickly integrate the problem diagnosis functionality into their own products, making the problem-solving process more efficient.
Thanks to OpenSumi’s excellent custom modular design and the powerful terminal rendering capabilities of Xterm.js, we are now able to dynamically monitor and parse terminal behavior by inserting OSC escape control characters into the Shell. As a result, the IDE can sense the real-time status of the terminal, such as when the terminal is prompting for input, when a user is typing commands or characters, and the beginning and end of specific commands. This capability makes traditional terminal interactions more customized and intelligent.
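OpenSumi does not publicly document its exact escape sequences; shell-integration schemes of this kind typically wrap the prompt and command phases in OSC markers, roughly as below. The `633`-style codes are an assumption modeled on common shell-integration conventions, not OpenSumi's actual protocol:

```typescript
const ESC = '\x1b';
const BEL = '\x07';

// OSC-style markers injected into the shell's prompt and command hooks so
// the IDE can parse terminal output and know which phase the shell is in.
const markers = {
  promptStart: `${ESC}]633;A${BEL}`,
  commandStart: `${ESC}]633;B${BEL}`,
  commandExecuted: `${ESC}]633;C${BEL}`,
  commandFinished: (exitCode: number) => `${ESC}]633;D;${exitCode}${BEL}`,
};

// The IDE side recognizes a marker by scanning for the OSC prefix.
function isShellIntegrationSequence(chunk: string): boolean {
  return chunk.includes(`${ESC}]633;`);
}
```

With markers like these in the stream, the terminal module can tell apart a prompt awaiting input, a command being typed, and a command's output.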
OpenSumi’s terminal state awareness feature offers tremendous potential to revolutionize traditional command-line interactions. When the terminal detects that a user is typing characters starting with “#” in the Prompt state, it initiates an intelligent interaction. Using Xterm.js Decoration technology, the terminal renders an AI interaction popup around the cursor. Users can enter natural language questions, and after the large model processes it, a series of executable Shell commands are returned. Users just need to select the required command from the prompts, send the command to Shell through OpenSumi’s terminal module, and hit enter to execute. This method makes invoking terminal commands through conversation more natural and intuitive.
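The trigger itself is simple to model: only while the terminal is in the prompt state does input beginning with `#` switch to natural-language mode. A minimal sketch (the real feature additionally renders the popup via Xterm.js decorations):

```typescript
type TerminalState = 'prompt' | 'executing';

// Decide whether typed input should open the AI popup: only in the prompt
// state, and only when the line starts with '#'.
function shouldTriggerAIPopup(state: TerminalState, input: string): boolean {
  return state === 'prompt' && input.trimStart().startsWith('#');
}

// Strip the '#' marker before sending the question to the model.
function extractQuestion(input: string): string {
  return input.trimStart().replace(/^#\s*/, '');
}
```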
OpenSumi is continuously optimizing this feature, aiming to enhance the level of intelligence and usability of the terminal autocomplete function, aligning it with the code completion experience within the IDE editor. Heavy users of the Linux Shell will be able to enjoy the convenience AI brings, allowing them to say goodbye to the days of using CheatSheets.
In the regular software development process, resolving code conflicts is often a time-consuming process that relies on the subjective judgment of developers. The internal code hosting platform of Ant Group has fine-tuned the AI model using a large amount of online conflict resolution data and has now reached an accuracy rate of about 75%. Developers can now use OpenSumi 3.0 to quickly set up intelligent conflict resolution scenarios, which not only significantly reduces the workload of manual conflict resolution but also increases the speed and efficiency of solving conflicts.
Another important update is the introduction of OpenSumi Design in OpenSumi 3.0. This design standardizes the components and UI in the IDE, not only emphasizing an AI-native interaction style but also offering a fresher, more modern, and highly customizable user interface design. The light-color style skin included in OpenSumi Design optimizes the user experience.
The highlight of this update also includes the upgrade of CodeBlitz, OpenSumi’s container-free, pure frontend IDE solution. CodeBlitz, which incorporates AI technology, has seen significant improvements. In addition to its original powerful code “reading” capabilities, CodeBlitz now also provides excellent code completion functions with AI, supporting backend language code completion like Java, Go, C++, even when running in a pure browser environment.
The AI Native version of CodeBlitz has undergone a significant upgrade in its communication protocol. OpenSumi previously used the same communication scheme as Eclipse Theia, relying mainly on the vscode-jsonrpc library. This library is based on the JSON-RPC 2.0 protocol and implements Remote Procedure Calls (RPC) between the IDE client and server, as well as with the plugin host. JSON-RPC is a concise, lightweight protocol that exchanges data in JSON format, keeping data transmission simple.
However, in the OpenSumi 2.0 version, although the JSON RPC protocol is easy to use, it also brought some performance challenges:
- Text formats trade efficiency for readability. JSON's human readability and simplicity come at a cost: numbers may carry redundant spaces or formatting characters, and strings may be full of escape characters, adding redundancy to data transmission.
- Large payloads are expensive. Because JSON is a text format, its size is usually larger than binary formats, so transmitting large amounts of data occupies more network bandwidth and reduces transmission speed.
- Parsing and serialization overhead. Serializing and deserializing JSON consumes time and computational resources, which significantly impacts performance in scenarios like IDEs with high-frequency message exchange.
These issues are especially prominent in Cloud IDE scenarios, leading to frequent network latency and difficulties in loading large files, thereby affecting the user experience of Cloud IDE.
To address these challenges, OpenSumi 3.0 thoroughly rebuilt its communication layer, removing the dependency on vscode-jsonrpc and rewriting it as OpenSumi RPC. OpenSumi RPC uses Fury, a high-performance, multi-language serialization framework based on JIT dynamic compilation that Ant Group open-sourced in 2023 (since donated to the Apache Software Foundation as Apache Fury, incubating), transforming the original text-based communication protocol into a binary one.
This greatly improves communication performance between the front-end and back-end, and between the back-end and plugin processes, especially when dealing with large and binary files, where the communication speed has increased a hundredfold, thereby significantly improving the quality of user experience.
The binary serialization scheme provides a way to convert data structures like object arrays into a compact and efficient binary format. This scheme is particularly suitable for network transmission, persistent storage, and inter-process communication, to help in the efficient transfer of data and the rapid reconstruction of raw data. Utilizing byte bit encoding, the binary format encodes specific information on certain bits of the byte, for example, the first few bits of a byte might represent the version of the serialization protocol and control header information, followed by bits that represent field types.
In this serialization scheme, data types must be clearly specified. The serialization mechanism improves runtime parsing speed through compilation and code generation, avoiding on-the-fly type inference and conditional branching. Because binary serialization operates directly on byte streams, with no character encoding or decoding required, there is almost no performance loss when handling byte and character streams, and the binary payload is comparatively small.
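As a toy illustration of the byte-bit layout described above (the actual Fury wire format is considerably more involved), a single header byte might pack a 3-bit protocol version into its high bits and a 5-bit field type into its low bits:

```typescript
// Pack: high 3 bits = protocol version (0-7), low 5 bits = field type (0-31).
function packHeader(version: number, fieldType: number): number {
  return ((version & 0b111) << 5) | (fieldType & 0b11111);
}

// Unpack the same byte back into its two components.
function unpackHeader(byte: number): { version: number; fieldType: number } {
  return { version: (byte >> 5) & 0b111, fieldType: byte & 0b11111 };
}
```

Because the version and type sit in fixed bit positions, a receiver can dispatch on them after reading a single byte, with no text parsing at all.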
Front-end and back-end communication of OpenSumi 3.0:
For OpenSumi, we plan to package opensumi-rpc independently in the future, providing a new communication option for plugins and language services. Performance-wise, three examples of front-end requests to the back-end for different types of content demonstrate the performance difference between the two RPC methods. We also optimized network transmission with Fury: by reading the first few bytes of a packet, we can quickly determine its type and target process, improving parsing efficiency.
When dealing with small data (such as 10k or 50k payloads), we achieved a hundredfold speed improvement over JSON, and the effect is even more pronounced for larger payloads. These optimizations enable OpenSumi RPC to transmit files over 200 MB and to support streaming, greatly enhancing the development experience.
OpenSumi 3.0 Base Dependency Upgrades:
OpenSumi 3.0 has updated many of its underlying dependencies, including but not limited to the following components:
- Monaco has been upgraded from 0.35.0 to 0.47.0
- React has been upgraded from 16.8 to 18.2.0
- Mobx has been upgraded from 5.9.4 to 6.12.0
- Webpack has been upgraded from 4.39.3 to 5.90.0
The updates to the base components have enabled us to implement more functionalities at the framework’s core, for example:
- An automatic folding feature for unchanged code, improving the experience of reading Diff code.
- Enhanced code-completion features and a more polished Toolbar design.
- Optimizations to a series of internal built-in components, such as Modal, Dialog, Popover, Notification, etc.
An instance with a WebAssembly runtime, opensumi.run, has been opened up, allowing developers to run Node.js projects in a browser environment on an emulated operating system.
On the path of technological innovation, Alibaba and Ant Group have always been at the forefront of the industry. To meet internal demand for modern development tools, we developed a POSIX-compatible runtime environment based on WebAssembly. This platform, tentatively named WebC, is not only compatible with Node.js 16 but also integrates common command-line tools such as bash and libgit2, and can run some applications compiled to the wasm format.
Combining the pure frontend version of OpenSumi and the powerful capabilities of WebC, we have developed China’s first pure frontend online integrated development environment (IDE) based on WebAssembly technology, named opensumi.run. This online IDE offers enhanced service support through plugin processes. Website: https://opensumi.run/opensumi/run
Compared to other pure frontend editor products on the market, users may notice that the TypeScript/JavaScript language service functionality in these editors is not satisfactory. This is because they run entirely in the browser, without full file system support, especially the lack of synchronous calling functionality, and thus can only rely on the WebWorker version of tsserver (TypeScript’s Language Server). This constrained version of the language server offers very basic functionalities, and if users try to jump from one TypeScript file to another, it is very likely to result in errors. The reason for this is that the language server has not fully indexed the project, unable to determine whether the target file is bar.js, bar.ts, bar.tsx, or bar.jsx, thus the editor might try to open a non-existent file without suffixes.
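The resolution problem described above can be sketched as trying candidate extensions in order, which requires a file index that a WebWorker-hosted tsserver lacks. A minimal sketch (real module resolution in tsserver is far more elaborate):

```typescript
// Candidate extensions tried when a TypeScript import omits the suffix.
const CANDIDATE_EXTENSIONS = ['.ts', '.tsx', '.js', '.jsx'];

// Resolve a specifier like './bar' against the set of files known to the
// workspace index; without such an index the language server cannot tell
// which candidate actually exists.
function resolveImport(specifier: string, knownFiles: Set<string>): string | undefined {
  for (const ext of CANDIDATE_EXTENSIONS) {
    const candidate = specifier + ext;
    if (knownFiles.has(candidate)) return candidate;
  }
  return undefined;
}
```

With a full file system under WebC, this lookup becomes a cheap set membership test instead of a guess.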
For OpenSumi, there are two plugin environments: one runs in the local Node.js environment, launched by the IDE Server side, and has access to the full Node.js API. As such, most CloudIDE plugins run in this environment. The other environment is within WebWorkers, which cannot directly access the system APIs and can only make asynchronous calls through the main thread. This limits the types of plugins that can run in WebWorkers, usually those that are environment-independent, such as text editing, style decoration, or enhancements to the IDE interface.
In light of this, based on the inherent Node.js environment of WebC, we can allow the Node.js version of the plugin processes, originally launched by the OpenSumi Server side, to run seamlessly within the processes simulated by WebC. This approach eliminates the traditional forwarding server, allowing plugin processes to communicate directly with the main thread via the browser’s MessageChannel, reducing communication costs and enhancing efficiency.
WebContainer-based Pure Front-End IDE Plugin Processes

FaaS development scenarios demonstrate that in early versions, without the concept of a file system, pure front-end solutions struggled to achieve efficient file searching. With the introduction of a file system based on WebContainer technology, coupled with minimal code optimization, fast full-text search was achieved without even needing the ripgrep tool.
Moreover, opensumi.run already includes built-in GitLens and basic Git functionality, currently provided as a technical preview. Subsequently, we plan to further optimize the experience between WebContainer and opensumi.run, and consider open-sourcing the entire WebAssembly runtime.
Regarding the plugin marketplace, the migration to the Alipay Mini Program Cloud Plugin Marketplace has made downloading plugins more stable.
Community IDE SIG

OpenSumi is collaborating with the OpenAnolis (Dragon Lizard) community and the openEuler community to establish IDE SIG groups, aiming to promote technical exchange, information sharing, and resource co-building in the domestic IDE field. The establishment of the OpenAnolis IDE SIG has already made a positive contribution to this effort.
The Rise of AI Agents: With the launch of Devin and the GitHub Copilot Workspace, AI Agents have become the copilots for human developers, assisting in addressing issues and demands. Future developers will define objectives, such as adding new interface features, and AI Agents will independently handle the entire development task set, from code writing to deployment, within the proficient Workspace environment, becoming the key link in the software development lifecycle.