Converting XText grammars into Antlr4

Recently, a developer asked a question about using an Antlr grammar generated from an XText grammar. The grammar, checked into a Git repo, was generated from an XText file elsewhere in the same repo. Besides being completely out of sync with its XText source, the generated grammar looked like it should compile with the Antlr3 tool and produce a compilable parser, but it didn’t: the Antlr3 grammar contained action code for tree construction that depends on XText libraries.

Posted in Tip

Parser generators on the Web

In my implementation of Trash on the Web, I collected a list of some online parser generators. There aren’t many, and most have a restricted set of features. For example, REx is probably the best in conversion and parser generation, but it doesn’t have basic grammar analysis. Many of the sites do not offer EBNF input, and none of the sites tell the user what the input syntax is.

Posted in Tip

Writing a universal programming language extension for VSCode

There was an interesting question on Stack Overflow today: is it possible to write an extension for VSCode that can accept any grammar and support semantic highlighting, defs/refs, and other language features?

I’ve always toyed with the idea of a “universal language server”: a server that dynamically reads a grammar and then parses input against it.

So, I decided to write such an extension. It took me about four hours to get something working, but I would assume someone could write something faster and cleaner.

Is such an extension possible? The answer is yes, but writing one depends on several factors: Have you written a VSCode extension before? Are you starting from an existing implementation? Are you a language tool expert? If you answer “no” to any of these questions, it could take a bit of time to write.

I started from the AntlrVSIX extension for VSCode that I wrote a year or two ago. The server is in C# and the client in TypeScript.

The server code originally required a month or two to write because there weren’t any good examples to model it on. In addition, a bit of infrastructure needed to be written because the server-side APIs for LSP did not implement version 3.16 of the Language Server Protocol, which includes “semantic highlighting”.

For the grammar specification, one has to put a stake in the ground and decide what parser generator to use. There are many generators, but I used Antlr4 because that is what I am familiar with. And, while Antlr4 can target C#, the parser it outputs requires a driver, and the code must be compiled. Therefore, I assumed this would be done in a step prior to running VSCode. “Users” can write a new grammar, generate a parser, build it, then tell the extension where the parser is located.

I’ve noted that other parser implementations exist. One “fast” implementation in JavaScript is Chevrotain. Unfortunately, Chevrotain rules are specified in JavaScript code, so while it would be easy to convert an EBNF-based grammar like Antlr’s or BNFC’s to Chevrotain, scraping the code to extract a grammar from Chevrotain syntax would be difficult, because the rules can be modified on the fly.

The output of a parse is a parse tree. But a grammar by itself is insufficient to classify the tokens in a file.

In addition to the grammar, you will need to declare classes of symbols. Here, I used XPath expressions, each of which matches a particular kind of parse tree node. The reason for XPath expressions is that we want the context of a token to be used for classification. For example, this grammar-based editor selects tokens from the lexer for classification. But that ignores the fact that in many languages tokens have different meanings because they occur in different contexts in the parse. If you ignore the parse tree context that a token occurs within, you may as well just use Textmate.
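To make the idea concrete, here is a small sketch of context-based classification, in Python rather than the extension’s C#. The parse tree is mocked up as XML, and the patterns use the XPath subset that Python’s ElementTree supports. The rule names and class names here are hypothetical, not the ones the extension actually uses.

```python
import xml.etree.ElementTree as ET

# A toy parse tree for "int f() { int x; }", serialized as XML.
# The rule names are made up, not those of the real Java grammar.
tree = ET.fromstring(
    "<compilationUnit>"
    "<methodDeclaration><identifier>f</identifier>"
    "<localVariableDeclaration><identifier>x</identifier></localVariableDeclaration>"
    "</methodDeclaration>"
    "</compilationUnit>"
)

# Each class of symbol is declared with an XPath pattern. The same token
# type (identifier) gets a different class depending on its context.
classifiers = {
    "method": ".//methodDeclaration/identifier",
    "variable": ".//localVariableDeclaration/identifier",
}

classified = {}
for cls, path in classifiers.items():
    for node in tree.findall(path):
        classified[node.text] = cls

print(classified)  # {'f': 'method', 'x': 'variable'}
```

The key point is that both tokens are plain identifiers to the lexer; only their position in the parse tree distinguishes a method name from a variable name.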

Note, a Textmate language spec is an alternative solution, but Textmate grammars are difficult to implement. A Textmate pattern is either a single regular expression or a “dual-level” regular expression pattern. As far as I know, there are no tools to convert a context-free grammar with class patterns into a Textmate specification. My gut feeling is that an XPath expression can often be translated into a “dual-level” pattern, but not always.

Putting it all together: Java as an example

Let’s start with the Java grammar. To get this extension to work, you will need to clone the grammars-v4 repo, and build a parser for the Java grammar.

git clone https://github.com/antlr/grammars-v4.git
cd grammars-v4/java/java
dotnet build
cd ../..

Next, we need to set up the three ~/.grammar-* files. In ~/.grammar-location:


In ~/.grammar-classes:



In ~/.grammar-classifiers:


Finally, we need to clone the extension code, build it, then run VSCode.

git clone
cd uni-vscode
dotnet build
cd VsCode
code .

Semantic highlighting

I tried it out for Java, with five classes of symbols defined. Here’s how the editor displays a Java source file with the standard Java extension.

This is how the editor displays the same Java source file with the “universal language server”.

Hover, Select, Defs/Refs

This implementation does support mouse hover pop-ups. The information displayed is just the classes that the symbol belongs to.

Select (aka “textDocument/documentHighlight”) only selects one symbol because there is no symbol table implemented. Similarly, the Defs/Refs of a symbol only fetch the one symbol selected.

Implementation problems

  • Tagging is slow: O(Number of tree nodes * number of Classifications). This is because each classifier is run separately with the XPath engine.
  • Defs/Refs/Select only handle one symbol.
  • The location of the grammar, classes, and classifiers should be in one file.
  • The server can only handle one grammar per VSCode session.
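For the first issue, one fix might be to walk the tree once and look each node’s context up in a table, rather than running every XPath classifier separately. Below is a minimal sketch in Python, assuming classifiers are restricted to parent/child patterns; the tree shape and names are invented for illustration, not the extension’s actual data structures.

```python
def classify(tree, table):
    """Classify tokens in one traversal: O(nodes), not O(nodes * classifiers).
    tree:  nested dicts {"rule": ..., "text": ..., "children": [...]}.
    table: maps a (parent rule, rule) pair to a class name."""
    result = {}
    stack = [(None, tree)]
    while stack:
        parent_rule, node = stack.pop()
        cls = table.get((parent_rule, node["rule"]))
        if cls is not None and "text" in node:
            result[node["text"]] = cls
        for child in node.get("children", []):
            stack.append((node["rule"], child))
    return result

# Two identifiers classified differently by their parent context.
tree = {"rule": "unit", "children": [
    {"rule": "method", "children": [{"rule": "identifier", "text": "f"}]},
    {"rule": "var",    "children": [{"rule": "identifier", "text": "x"}]},
]}
table = {("method", "identifier"): "method", ("var", "identifier"): "variable"}
print(classify(tree, table))
```

This trades expressiveness for speed: arbitrary XPath patterns (ancestors at any depth, predicates) would need a richer matching scheme than a single table lookup.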

–Ken, July 9, 2021

Posted in Tip

Testing of Antlr language targets

A few weeks of work, and I finally have a driver generator that can test any grammar with any target.

alias make='mingw32-make.exe'
for i in CSharp Java JavaScript Dart Python3
do
    echo $i
    rm -rf Generated
    dotnet-antlr -t $i
    pushd Generated
    mingw32-make.exe run RUNARGS="-input 1+2 -tree"
    mingw32-make.exe clean
    popd
done

Posted in Tip

Forking a Git repo more than once

Github does not support forking a repo more than once. If you try to do that, it’ll just redirect to your original fork. That could be a problem if you have an outstanding pull request on a branch that you want to modify for an unrelated change. Rather than muck around with the forked repo, create a new fork this way.

  • git clone antlr4-pr2
  • In Github, create a new repo antlr4-pr2.
  • git remote rename origin upstream
  • git remote add origin
  • git remote -v
  • git push -v origin master

The main problem with this approach is that Github imposes a lot of restrictions on what you can do. Github “pull requests” cannot be performed from repositories that haven’t been explicitly “forked” through the UI. It might be best to follow the normal workflow: fork the repo, clone it, “git branch foobar”, “git checkout foobar”, then make your changes, check them in, and publish from the Github Desktop app.

Posted in Tip

System Service Exception BSOD

I’ve been seeing BSODs, so I ran DISM and SFC. Looks like there were some problems…

C:\WINDOWS\system32>dism /online /cleanup-image /restorehealth

Deployment Image Servicing and Management tool
Version: 10.0.19041.572

Image Version: 10.0.19042.685

[==========================100.0%==========================] The restore operation completed successfully.
The operation completed successfully.

C:\WINDOWS\system32>sfc /scannow

Beginning system scan. This process will take some time.

Beginning verification phase of system scan.
Verification 100% complete.

Windows Resource Protection found corrupt files and successfully repaired them.
For online repairs, details are included in the CBS log file located at
windir\Logs\CBS\CBS.log. For example C:\Windows\Logs\CBS\CBS.log. For offline
repairs, details are included in the log file provided by the /OFFLOGFILE flag.


Posted in Tip

Release 8.3 of Antlrvsix

After two months of work, I’m a few days away from making release 8.3 for Antlrvsix. Most of the changes actually pertain to Trash, the command-line interpreter shell contained within Antlrvsix, but there are a few other important changes to the extension itself.

Release 8.3 features an all-new input/output pipe implementation between commands in Trash. It uses a JSON serialization of parse trees between commands. The JSON serialization also contains the parser, lexer, token stream, file contents, and file location. Basically, it includes everything a command would need to manipulate a parse tree. The purpose of the JSON file is that each command can now be implemented “out-of-process”, meaning that Trash could be replaced by Bash, with each command implemented as an independent program in any language of choice. Nobody wants yet another monolithic program for implementing programming language tools; this release works towards an open system. With this release, all the commands and Trash are still implemented in one program, but that will be switched over in a month or two.
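As a sketch of what an out-of-process command could look like, here is a small Python filter that reads a JSON document, annotates the tree it carries, and writes the document back out, the way a command in a Bash-style pipeline would. The schema here (a “tree” of nodes with “rule” and “children” fields) is invented for illustration; it is not the actual serialization Trash uses.

```python
import io
import json

def count_nodes(node):
    """Recursively count the nodes of a parse tree."""
    return 1 + sum(count_nodes(c) for c in node.get("children", []))

def run(stdin, stdout):
    # Read the serialized document, annotate it, and pass it downstream
    # unchanged otherwise, so further commands can keep processing it.
    doc = json.load(stdin)
    doc["node_count"] = count_nodes(doc["tree"])
    json.dump(doc, stdout)

# Simulate the pipe with in-memory streams; a real command would use
# sys.stdin and sys.stdout.
src = io.StringIO('{"tree": {"rule": "expr", "children": [{"rule": "id"}, {"rule": "num"}]}}')
dst = io.StringIO()
run(src, dst)
print(json.loads(dst.getvalue())["node_count"])  # 3
```

Because the whole state travels in the stream, each filter like this can be an independent executable in any language, which is exactly what makes the Bash-replacement idea workable.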

I’ve also included a few new goodies:

  • Added a grammar diff program.
  • Added an ISO 14977 parser.
  • Added AGL (Automatic Graph Layout), XML, JSON output.
  • Added Cat, Echo, Wc commands.
  • Added BNFC’s LBNF and other associated grammars. NB: This work is not yet complete, and only works for the simplest of grammars.

ISO 14977 was something that I decided to implement a while ago. But I didn’t know what I was getting into, and really should have read what D. Wheeler wrote about the spec. While the parser is now almost done, I learned along the way that the spec has several problems. One error is that, according to its own grammar, the symbol meta identifier cannot contain spaces (meta identifier = letter, {letter | decimal digit};), yet throughout the spec–and in “meta identifier” itself–meta identifiers do contain spaces! And, as Wheeler pointed out, there are many other problems. Yes, grammars in ISO 14977 are very verbose…”a sea of commas”. But, it does have some interesting features, and so it is worth adding a parser for it.

The “diff” program I implemented with this release is interesting. I used the Zhang-Shasha tree-edit distance algorithm, extending it to record the actual tree edits that correspond to the minimum tree-edit distance. This algorithm is, unfortunately, for ordered trees, so it works best for small differences. I will be trying to implement other algorithms in the next month or two. There is certainly a lot that could be done here. One important type of difference is to include not only simple single-node inserts and deletes, but more complex operations like fold and unfold.

In addition, with this release, I’m disabling semantic highlighting for VS2019. This is because it’s buggy and slow, and despite my warning people, they complain about it being buggy and slow! Use the extension for VSCode. It’s really very good. In the next release, I will try to fix Antlrvsix for VS2019, but you never know: Microsoft needs to implement semantic highlighting in its LSP client for VS2019.


Posted in Tip

Antlrvsix and Trash release v8.2

Released yesterday is version 8.2 of Antlrvsix and Trash. This release enhances the Trash shell further, making it look more and more like a full-fledged analogue of the Bash shell. However, Trash uses the lingua franca of parse trees and XPath. In this release, one can now pipe output–parse trees–between commands. The find command has been renamed to xgrep to further the analogy with grep. Various output commands have been added, such as st and text. File globbing for ls, cd, parse and other commands has been rewritten to be more like what you would see in Bash. I’ve added a run command to generate, build, and run parsers.

All in all, I think Trash is finally becoming the tool I’ve envisioned for language development.

Posted in Tip

Updates to Antlrvsix

Many months ago, I had VSCode working with Antlrvsix. Back then, rather than release what I had, I decided to put it off because I was concentrating my effort on expanding the server capabilities. But the main reason I didn’t release a VSCode extension was that I could not support “semantic highlighting” in my server in a standardized way, because the API I was using did not support it. Since then, the server capabilities have been enhanced. More importantly, I changed the API so that semantic highlighting in Antlrvsix works with VSCode. I have now released Antlrvsix for VSCode to the Microsoft Marketplace for VSCode.

To get semantic highlighting working with VSCode, I decided to write a drop-in replacement for Microsoft’s Microsoft.VisualStudio.LanguageServer.Protocol API. While Microsoft does release a new version of the API every three months or so, many features that have been in the LSP spec for years are still missing. Semantic highlighting is a crucial new addition, but based on the changes I’ve seen over the last year, I have no confidence that Microsoft will ever implement it. My drop-in replacement is the current version of the API with additions for semantic highlighting.

On the grammar transforms side, I have a script for the “Trash” command-line tool of Antlrvsix that partially optimizes the Java9 grammar. The transforms for expressions aren’t there yet, but so far the optimized grammar’s parser runs several times faster.

Posted in Tip