Comments (11)
Should we also rework the preprocessor? I think the current cfront, with its handling of compiler directives and macro expansion, might cause problems for your work.
Yes, I'm currently investigating it. The current implementation is rather unprincipled and may produce unexpected side effects, so I'll try to handle preprocessor directives in the parser in a more consistent way (token-based parsing instead of manual string parsing).
I'll keep posting updates on the rework's progress here.
Edit 1: To avoid side effects entirely, I plan to extract the preprocessor completely out of the lexer and parser: it would expand directives into another file, which is then re-read into the lexer's token pipeline and passed to the parser.
Edit 2: After discussion with Jserv, shecc will not have a separate preprocessor; instead, I'll focus on making it consistent in token-based parsing form.
from shecc.
After successfully finishing #85 and #89, @vacantron will focus on improving the SSA IR and its related compilation process. Next, he plans to devote attention to #88, which involves an SSA-based optimization phase. In the meantime, this presents an ideal opportunity to revise and rework cfront.
I recently discovered laroc, a small-scale project that aims to build a C99 compiler for RISC-V. It can serve as a reference for improving the frontend and code-generation aspects of C compilers.
Currently, cfront uses a scannerless parser with the IR emitter bound directly into it, totaling ~3000 LOC. Based on my experience contributing to industrial-grade programming languages (V in this case), shecc's frontend parser lacks ease of debugging and is hard for others to contribute to (even though shecc is meant to be educational).
I have undertaken the task of reworking my earlier compiler project, AMaCC, with a primary focus on its educational value and potential extensibility. I wholeheartedly acknowledge the limitations of the current C front-end implementation, including the absence of a robust AST and proper modularization.
At the same time, @vacantron is dedicated to introducing the SSA-based IR following his initial efforts in register allocation. I want to make sure that we avoid any significant conflicts when it comes to reworking the C front-end. Could you please consider proposing a plan for submitting pull requests that involve minimal changes?
No problem. For minimal changes, I would like to try separating the current parser into a lexer and a parser, so that lexical analysis and grammar parsing stay separate while all IR-related functions remain in the same place in cfront. This would only extract the lexical-analysis functionality from cfront and pass a token stream to the parser.
That sounds promising. You can track the ongoing migration to an SSA-based IR in pull request #85.
I would like to know the reason for not having a separate preprocessor. The current preprocessing logic is mixed into both the lexer and parser as special cases. If preprocessing were separated, the implementation could benefit from fewer states in the lexer and fewer special-case conditions in the parser. Also, new features (if planned), such as token concatenation in macros (`##`), might be easier to add with separate preprocessing. Other parsing algorithms (such as a more error-resilient one) could also be tried out easily, since preprocessing would no longer need to be dealt with.
This project draws inspiration from AMaCC, which in turn was influenced by the remarkable c4. All three projects share a common theme of minimalism, emphasizing self-bootstrapping without the need for external tools. This is precisely why this project eschews the use of separate assemblers and linkers, despite being a cross-compiler. Unlike mature compilers like GCC and LLVM, where the C preprocessor (cpp) is a distinct program, in our project, cpp is integrated into the lex/parser. This approach aligns with our minimalist design philosophy. While this integration adds complexity to the existing C front-end, I believe that the benefits of a more unified design principle justify this complexity.
- Discard the scannerless parser and rewrite it into a separate lexer and parser.
#92 is the starting point for this task.
As of the merge of #111, the work on cfront is considered temporarily complete, but I will still leave this issue open for the following reasons:

- The viability of separating the parser and lexer is doubtful, since memory usage will increase if the source file's tokenization phase is completed entirely before the syntactic-analysis phase and the token information is stored in struct form.
- As mentioned above, the tokenization strategy may have to be heavily changed due to the different parsing strategies used by cpp (the C preprocessor) and the C language itself. More precisely, the newline (`\n` or `\r`) or backslash (`\`) characters may need to be considered valid tokens in order to be parsed successfully. Additionally, the token-aliasing strategy requires the previous changes to be done first (see #107).
- The preprocessor syntax validation, specifically the unused-token-after-expression validation, is unimplemented due to reason 2.

Generally, these issues require additional investigation in order to be resolved.