MCQs on Compiler Design


In this article, we list 50+ Multiple Choice Questions on Compiler design.

Questions.

1. What types of programming languages do programmers and other users use to write code?

Low-level languages.
High-level languages.
Machine Code.
Mid-level languages.
Normally programmers use high-level languages since they are easier to read, write, debug and maintain.

2. Which of the programs takes high-level code and transforms it into assembly code readable by a machine?

Assembler.
Linker.
Compiler.
Interpreter.
A compiler is a program that accepts code written in a high-level language such as Java or C/C++ and transforms it into assembly code: instructions that can be executed by a computer processor.

3. What is the output obtained from an assembler?

Object file.
Data file.
Task file.
Program file.
An assembler emits object code. These usually have a .obj or .o file extension.

4. Which program groups characters in code into tokens in a compiler?

Parser.
Scanner.
Code generator.
Lexer.
A scanner takes a sequence of characters from the source code and produces a sequence of tokens that are fed to the compiler's parser.
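
As a rough sketch of how a scanner groups characters into tokens (the token names and regular expressions here are illustrative, not taken from any particular compiler):

```python
import re

TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("LPAREN", r"\("),
    ("RPAREN", r"\)"),
    ("SKIP",   r"\s+"),   # whitespace matches a rule but is discarded
]
MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def scan(source):
    """Turn a character stream into a list of (kind, lexeme) tokens."""
    tokens = []
    for m in MASTER.finditer(source):
        if m.lastgroup != "SKIP":
            tokens.append((m.lastgroup, m.group()))
    return tokens

print(scan("x = 42 + y"))
# [('IDENT', 'x'), ('OP', '='), ('NUMBER', '42'), ('OP', '+'), ('IDENT', 'y')]
```

Each regular expression defines one syntactic class; characters matching no rule are silently skipped in this sketch, where a real scanner would report an error.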

5. How many types of parsing are there?

2
3
4
5
These are *top-down* and *bottom-up* parsing. The former starts at the top of the parse tree and works its way down using the grammar rules. The latter does the same except that it starts from the bottom of the parse tree.

6. Among the following parsers, which is the most powerful?

Operator precedence.
Canonical LR.
SLR.
LALR.
A canonical LR parser is an LR(k) parser for k=1, i.e. with a single lookahead terminal. It is the most powerful of the four.

7. What do we call breaking source code down into syntactic classes?

Syntax analysis.
Lexical analysis.
Scanning.
Interpretation.
Lexical analysis takes source code and breaks it down into tokens/syntactic classes.

8. What is the result of a bottom-up parse?

The left-most derivation.
The right-most derivation.
The right-most derivation in reverse.
The left-most derivation in reverse.
A bottom-up parser results in the right-most derivation in reverse. An example of a bottom-up parser is the LR parser.

9. What is another name for bottom-up parsing?

Shift-reduce parsing.
Recursive descent parsing.
Predictive parsing.
Backtracking.
Shift-reduce parsing involves two steps for bottom-up parsing. These are the shift-step and the reduce-step.

10. What is the best data structure to implement a symbol table?

Tree.
Self-organizing list.
Hash table.
Linked list.
A hash table has constant-time O(1) average-case access, making it very efficient for implementing a symbol table.
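
A minimal sketch of a hash-table-backed symbol table with nested scopes (the class and method names are illustrative):

```python
class SymbolTable:
    def __init__(self):
        self.scopes = [{}]          # stack of hash tables, one per scope

    def enter_scope(self):
        self.scopes.append({})

    def exit_scope(self):
        self.scopes.pop()

    def define(self, name, info):
        self.scopes[-1][name] = info    # O(1) average-case insert

    def lookup(self, name):
        # Search innermost scope outward; each probe is O(1) on average.
        for scope in reversed(self.scopes):
            if name in scope:
                return scope[name]
        return None

st = SymbolTable()
st.define("x", "int")
st.enter_scope()
st.define("x", "float")
print(st.lookup("x"))   # float  (the inner scope shadows the outer one)
st.exit_scope()
print(st.lookup("x"))   # int
```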

11. What is the result of top-down parsing?

The left-most derivation.
The right-most derivation.
The right-most derivation in reverse.
The left-most derivation in reverse.
The result of top-down parsing is the left-most derivation. An example of a top-down parser is the LL parser.

12. Syntax analysis is modeled by?

Language syntax.
Context Free grammar.
Regular grammar.
Regular languages.
A CFG specifies the syntax of a programming language.

13. Among the parsing techniques, which avoids backtracking?

Top-down parsing.
Predictive parsing.
Recursive-descent parsing.
Recursive-descent parsing & predictive parsing.
Backtracking means that if one derivation of a production fails, the syntax analyzer restarts the process using different rules of the same production. Both recursive-descent parsing and predictive parsing avoid backtracking, which is why they are efficient.
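
A sketch of a predictive recursive-descent parser for the toy grammar E -> T { + T }, T -> id | ( E ). A single token of lookahead decides every step, so the parser never backtracks (the grammar and names are illustrative):

```python
class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self, tok):
        if self.peek() != tok:
            raise SyntaxError(f"expected {tok!r}, got {self.peek()!r}")
        self.pos += 1

    def parse_E(self):          # E -> T { + T }
        self.parse_T()
        while self.peek() == "+":   # the lookahead chooses the production
            self.eat("+")
            self.parse_T()

    def parse_T(self):          # T -> id | ( E )
        if self.peek() == "(":
            self.eat("(")
            self.parse_E()
            self.eat(")")
        else:
            self.eat("id")

    def parse(self):
        self.parse_E()
        if self.peek() is not None:
            raise SyntaxError("trailing input")
        return True

print(Parser(["id", "+", "(", "id", "+", "id", ")"]).parse())  # True
```

Because each lookahead token selects exactly one production, a wrong guess can never happen, and no input is ever re-read.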

14. Which symbol table implementation is based on the property of locality of reference?

Linked list.
Tree.
Hash table
Self-organizing list.
This means that in a given interval of time, only a small subset of the list is most likely to be accessed. This improves the efficiency of a linear search and achieves near constant-time access in the best case.
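
A sketch of a self-organizing list using the move-to-front heuristic: every successful lookup promotes the entry to the head, so recently used names (the locality-of-reference assumption) are found quickly next time. The code is illustrative, not from any specific implementation:

```python
class SelfOrganizingList:
    def __init__(self):
        self.items = []   # list of (name, info) pairs

    def insert(self, name, info):
        self.items.insert(0, (name, info))

    def lookup(self, name):
        for i, (n, info) in enumerate(self.items):
            if n == name:
                # Move-to-front: promote the accessed entry.
                self.items.insert(0, self.items.pop(i))
                return info
        return None

sol = SelfOrganizingList()
sol.insert("a", 1)
sol.insert("b", 2)
sol.insert("c", 3)
sol.lookup("a")                     # 'a' moves to the front
print([n for n, _ in sol.items])    # ['a', 'c', 'b']
```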

15. Which of the errors can be analyzed by a compiler?

Syntax errors.
Logical errors.
Semantic errors.
Runtime errors.
Among the above, the compiler can only detect syntax and semantic errors in the code.

16. In which phase in compiler design do we check the grammar of the programming language?

Lexical analysis.
Semantic analysis.
Syntax analysis.
Code generation.
We check the grammar of the programming language during the second phase in compiler design which is syntax analysis.

17. Among the following components, which is essential in semantic analysis?

Lex
Symbol table.
Type checker.
Yacc.
Type checking involves verifying that each operation executed in a program is in accordance with the type system of the programming language.

18. In a parse tree, what do leaf nodes indicate?

Non-terminals.
Sub-terminals.
Terminals.
None of the above.
A parse tree is a hierarchical representation of terminals and non-terminals. Interior nodes indicate non-terminals while leaf nodes are terminals.

19. What is the function of intermediate code?

None of the above.
To make errors easily detectable.
To eliminate the need of a new full compiler for every unique machine.
For semantic analysis.
Intermediate code decouples the compiler's front-end from its back-end: the same machine-independent front-end can be reused, and only a new back-end is needed for each target machine, instead of a new full compiler.

20. Which compiler design phase is also referred to as the scanner?

Syntax analysis.
Semantic analysis.
Lexical analysis.
Code generation.
In the lexical analysis phase, the lexical analyzer breaks the source code into tokens, discarding whitespace and comments. It is also referred to as the scanner.

21. Which compiler design phase is also referred to as the parser?

Lexical analysis.
Semantic analysis.
Syntax analysis.
Code generation.
Syntax analysis is also referred to as parsing. The syntax analyzer checks the syntactical structure of the input and verifies its correctness. This is done by building a Parse tree or Syntax tree.

22. How many major types of compiler optimizations are there?

4
6
2
None of the above.
There are two major classes of compiler optimizations: machine-dependent and machine-independent optimizations.

23. Choose an optimization technique that reduces multiple jumps?

Constant propagation.
Loop optimizations.
Peephole optimization.
Dead code elimination.
Peephole optimizations involve improving the performance of the target program by examining short sequences of target instructions and replacing them with shorter/faster sequences. This includes reducing multiple jumps.
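
One such peephole rule, sketched on an illustrative tuple-based instruction format: a jump whose target is itself an unconditional jump is redirected straight to the final destination:

```python
def collapse_jump_chains(code):
    """Replace 'goto L1' with 'goto L2' when L1 immediately does 'goto L2'."""
    # Map each label to the instruction that follows it.
    target = {}
    for i, instr in enumerate(code):
        if instr[0] == "label" and i + 1 < len(code):
            target[instr[1]] = code[i + 1]
    out = []
    for instr in code:
        while instr[0] == "goto" and target.get(instr[1], ("",))[0] == "goto":
            instr = target[instr[1]]     # follow the chain to its end
        out.append(instr)
    return out

code = [
    ("goto", "L1"),
    ("label", "L1"),
    ("goto", "L2"),
    ("label", "L2"),
    ("ret",),
]
print(collapse_jump_chains(code)[0])   # ('goto', 'L2')
```

A later dead-code pass could then delete the now-unreachable intermediate jump.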

24. Which among the following is an abstract form of intermediate code?

Object code.
Byte code.
3-address code.
None of the above.
Three-address code is an intermediate code representation used by optimizing compilers to aid in the implementation of code-improving transformations.
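
A sketch of generating three-address code from a hand-built expression tree; the t1, t2, ... temporary names are the usual convention, and the code itself is illustrative:

```python
counter = 0
code = []

def new_temp():
    global counter
    counter += 1
    return f"t{counter}"

def gen(node):
    """node is either a variable name or a tuple (op, left, right)."""
    if isinstance(node, str):
        return node
    op, left, right = node
    l, r = gen(left), gen(right)
    t = new_temp()
    code.append(f"{t} = {l} {op} {r}")   # at most three addresses per line
    return t

gen(("+", "a", ("*", "b", "c")))   # the expression a + b * c
print(code)   # ['t1 = b * c', 't2 = a + t1']
```

Each emitted instruction has at most one operator and three addresses (two operands, one result), which is what makes the representation easy to analyze and transform.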

25. Which of the symbols is unrelated to context-free grammars?

Start symbol.
Terminal symbol.
End symbol.
Non-terminal symbol.
A CFG consists of a start symbol, a set of terminals, a set of non-terminals, and a set of productions.

26. Which of the compilers is used to generate code for multiple machines?

Two-pass compiler.
Decompiler.
Cross compiler.
One-pass compiler.
A cross compiler is capable of creating executables for different platforms other than the one on which the compiler is running.

27. Which is not a function of a shift-reduce parser?

Shift.
Reduce.
Jump.
Accept.
A shift-reduce parser has shift, reduce, error and accept actions.

28. Which of the grammars produces multiple parse trees for the same string?

Regular.
Non-regular.
Ambiguous.
Unambiguous.
An ambiguous grammar is a CFG that produces more than one leftmost (or rightmost) derivation for the same string.

29. Which of the methods merges two loops into a single loop?

Constant folding.
Constant propagation.
Loop fusion.
Loop rolling.
Loop fusion, also called loop jamming, is a compiler optimization (a loop transformation) whereby multiple loops are replaced with a single loop.
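
The transformation can be illustrated in Python, assuming the two loop bodies are independent and iterate over ranges of equal length:

```python
def unfused(a, b):
    # Two separate loops: twice the loop overhead.
    out1, out2 = [], []
    for x in a:
        out1.append(x * 2)
    for y in b:
        out2.append(y + 1)
    return out1, out2

def fused(a, b):
    # One merged loop: legal only because the bodies are independent
    # and the iteration ranges match (len(a) == len(b) assumed here).
    out1, out2 = [], []
    for x, y in zip(a, b):
        out1.append(x * 2)
        out2.append(y + 1)
    return out1, out2

print(fused([1, 2], [3, 4]))   # ([2, 4], [4, 5])
```

Both versions compute the same result; the fused form halves the loop-control overhead and can improve cache locality.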

30. Which of the grammars can be translated into a DFA?

Generic grammar.
Left linear grammar.
Right linear grammar.
None of the above.
A right-linear grammar is a grammar in which the non-terminal symbol, if present, appears at the right end of each production body. Right-linear grammars generate regular languages, which can be recognized by a DFA.

31. What does a compiler do upon encountering an error during compilation?

Produce warnings.
Terminate compilation.
Complete reading the program.
Terminate program.
When the source code contains syntactic or semantic errors, the compiler completes reading the program but won't produce machine code.

32. What does an interpreter do upon encountering an error during interpretation?

Produce warnings.
Produce wrong results.
Terminate execution and report errors.
Complete reading the program.
When there are issues with the source code, the interpreter will immediately stop before proceeding to the next instruction.

33. Which among the following provides input for a compiler?

Linker.
Loader.
Preprocessor.
Assembler.
For example, in C/C++ a preprocessor resolves all #include and #define directives. Generally its functions entail file inclusion, macro processing, and so on.

34. What links parts of a program together for execution?

Assembler.
Loader.
Linker.
Preprocessor.
The linker merges object files together to make an executable file.

35. What is the output of a Java compiler?

Object code.
Executable file.
Byte code.
Library.
A Java compiler compiles Java high-level code into platform-neutral Java bytecode that can run anywhere.

36. An optimizing compiler,

is optimized to take less execution time.
is optimized to take less space.
optimizes code.
none of the above.
An optimizing compiler transforms code into a semantically equivalent form that runs faster and/or takes less space; "optimizes code" covers both, so it is the best answer.

37. Why do we remove left recursion?

It generates ambiguous grammar.
It generates unambiguous grammar.
It causes an infinite loop.
All the above.
Left recursion is a problem during top-down parsing: it can lead to infinite recursion. We solve this by preprocessing the grammar to eliminate the left recursion.
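
The standard transformation, A -> A a | b becomes A -> b A', A' -> a A' | ε, can be sketched as follows (the grammar representation here is illustrative):

```python
def eliminate_left_recursion(nt, productions):
    """Eliminate immediate left recursion for non-terminal nt.

    productions is a list of right-hand sides, each a list of symbols.
    """
    recursive = [p[1:] for p in productions if p and p[0] == nt]   # the "a" parts
    base      = [p for p in productions if not p or p[0] != nt]    # the "b" parts
    if not recursive:
        return {nt: productions}        # nothing to do
    new_nt = nt + "'"
    return {
        nt:     [b + [new_nt] for b in base],                # A  -> b A'
        new_nt: [a + [new_nt] for a in recursive] + [["ε"]], # A' -> a A' | ε
    }

# E -> E + T | T   becomes   E -> T E',  E' -> + T E' | ε
print(eliminate_left_recursion("E", [["E", "+", "T"], ["T"]]))
# {'E': [['T', "E'"]], "E'": [['+', 'T', "E'"], ['ε']]}
```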

38. A program that sets up an executable in memory for execution is referred to as a?

Linker.
Assembler.
Loader.
Interpreter.
A loader is responsible for loading an executable into main memory for execution. It does this by calculating the size of the program and then creating the needed space in memory for it.

39. What is the purpose of predictive parsing?

To implement backtracking in a top-down parser.
To avoid backtracking in a bottom-up parser.
To avoid backtracking in a top-down parser.
None of the above.
The purpose of predictive parsing is to build a top-down parser that never backtracks. For this, we transform the grammar by eliminating left recursion and performing left factoring.

41. Which among the following is used for basic block analysis?

Flowgraph.
Parse tree.
DAG.
Syntax tree.
A DAG (Directed Acyclic Graph) is a tool used to determine the structure of basic blocks. It lets us see how values flow within and among basic blocks. It is also useful for optimization.

42. What do we call replacing expensive operations with cheaper ones?

Loop optimization.
Code motion.
Strength reduction.
Loop invariant computation.
During compilation, we encounter expressions that consume more system resources, such as CPU cycles, time, and memory. We replace such expressions with cheaper ones without changing the result. This is referred to as strength reduction.
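
One classic strength reduction, sketched on an illustrative instruction format: multiplication by a power of two becomes a cheaper shift:

```python
def reduce_strength(instr):
    """Rewrite (op, dst, src, const) tuples; x * 2^n becomes x << n."""
    op, dst, src, k = instr
    if op == "mul" and k > 0 and k & (k - 1) == 0:    # k is a power of two
        return ("shl", dst, src, k.bit_length() - 1)  # shift amount n
    return instr

print(reduce_strength(("mul", "t1", "x", 8)))   # ('shl', 't1', 'x', 3)
print(reduce_strength(("mul", "t2", "x", 6)))   # unchanged: ('mul', 't2', 'x', 6)
```

The bit trick `k & (k - 1) == 0` holds exactly when k has a single set bit, i.e. when k is a power of two.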

43. What do we call a parse tree that shows attribute values at each node in the tree?

Abstract Syntax Tree.
Syntax tree.
Annotated parse tree.
Annotated syntax tree.
This is a parse tree with the attribute values at each node for given input string. It is also referred to as a decorated parse tree.

44. In a DAG, what do interior nodes represent?

Identifiers.
Names.
Operators.
Constants.
In a Directed Acyclic Graph(DAG), leaf nodes represent identifiers, names or constants while interior nodes represent operators.
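
A sketch of DAG construction by node sharing (hash-consing): identical (op, left, right) triples map to the same node, which is how the DAG exposes common subexpressions (the code is illustrative):

```python
nodes = {}   # (op, left_id, right_id) -> node id

def node(op, left=None, right=None):
    """Return the id of an existing identical node, or create a new one."""
    key = (op, left, right)
    if key not in nodes:
        nodes[key] = len(nodes)
    return nodes[key]

# Build (a + b) * (a + b): the two a+b subtrees collapse into one node.
a = node("a")              # leaf: identifier
b = node("b")              # leaf: identifier
s1 = node("+", a, b)       # interior: operator
s2 = node("+", a, b)       # reuses the same node as s1
print(s1 == s2, len(nodes))   # True 3
```

A tree would need five nodes here; the DAG needs only three, and the shared `+` node marks the common subexpression a compiler can compute once.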

45. Among the following parsers, which is a top-down parser?

LR(k) parser.
LALR(k) parser.
Recursive descent parser.
Operator precedence parser.
A recursive descent parser is a top-down parser built from a set of mutually recursive procedures whereby each procedure implements a non-terminal of the given grammar.

46. Which among the following is enough to convert an arbitrary CFG into an LL(1) grammar?

Eliminating left recursion.
Factoring the grammar.
None of the above.
Eliminating left recursion and factoring the grammar.
To convert an arbitrary CFG into an LL(1) grammar we first remove ambiguity, then remove left recursion, and finally make the grammar deterministic using left factoring. A grammar that is ambiguous, left recursive, or in need of left factoring is not LL(1); all three issues must be removed.

47. Given the following grammar,

S -> CC
C -> cC | d

which of the following statements is true?

The grammar is LR(1) but not LALR(1).
The grammar is SLR(1) but not LL(1).
The grammar is LL(1).
None of the above.
We say a grammar is LL(1) if the associated LL(1) parsing table has at most a single production in each table entry.

48. During a bottom-up evaluation of a syntax directed definition, inherited attributes can

Always be evaluated.
Never be evaluated.
Be evaluated only if the definition is L-attributed.
None of the above.
During a bottom-up evaluation of a syntax directed definition, inherited attributes can be evaluated only if the definition is L-attributed.

49. Which among the statements is NOT true?

LALR is powerful compared to SLR.
Ambiguous grammar cannot be LL(k) for any k.
Unambiguous grammar has similar left-most and right-most derivations.
An LL(1) parser is a top-down parser.
We say a grammar is ambiguous if there is a string s for which the grammar has more than one left-most derivation (equivalently, more than one right-most derivation). An unambiguous grammar has a unique left-most and a unique right-most derivation for each string, but these are different derivation sequences, so the third statement is NOT true.

50. We have the following expression grammar. Which of the statements below is true?

E -> E * F | F + E | F
F -> F - F | id

+ and - have the same precedence.
* has a higher precedence than +.
- has a higher precedence than *.
+ has a higher precedence than *.
- has a higher precedence than *: the deeper a production sits in the grammar, the more tightly its operator binds, and - is produced by F, which is below both * and +.

51. Which of the checks is included in static checking?

Type checks.
Flow checks.
All.
Uniqueness checks.
All of the above checks are included in static checking.

52. Which of the statements is FALSE?

High-level languages can be translated into different intermediate representations.
CFGs can be used to specify lexical and syntactic rules.
Type checking is done before parsing.
Function arguments can be passed to a function using the stack.
Parsing is done in the syntax analysis phase of compiler design, while type checking is done in the later semantic analysis phase; hence type checking is done after parsing, not before.

53. A program P has two source modules M1 and M2 in different files. If M1 has a reference to a function defined in M2, when is this reference resolved?

Run time.
Compile time.
Link time.
Load time.
References are resolved at link time by a linker. The linker extracts object modules from libraries and adds them to the executable.

54. Why are some optimizations carried out in intermediate code?

Program analysis is more accurate when done on intermediate code than on machine code.
So that information from the compiler front-end can be used for optimization.
So they can enhance compiler portability for other target processors.
Information from data flow analysis can be used for optimization.
They allow for portability to other target processors. Intermediate code is machine independent and at the same time close to machine code.

55. Among the following pairs, which is the easiest to implement and most powerful?

Canonical LR and LALR.
LALR and Canonical LR.
SLR and Canonical LR.
SLR, LALR.
SLR is the easiest to implement while Canonical LR is the most powerful.
