Shift/reduce conflicts in a propositional logic parser in Happy



I'm writing a simple propositional logic parser in Happy, based on the BNF definition of the propositional logic grammar. This is my code:

{
module FNC where
import Data.Char 
import System.IO
}
-- Parser name, token types and error function name:
--
%name parse Prop
%tokentype { Token } 
%error { parseError }
-- Token list:
%token
     var { TokenVar $$ }  -- alphabetic identifier
     or { TokenOr }
     and { TokenAnd }
     '¬' { TokenNot }
     "=>" { TokenImp } -- Implication
     "<=>" { TokenDImp } --double implication
    '(' { TokenOB } --open bracket
    ')'  { TokenCB } --closing bracket
    '.' {TokenEnd}
%left "<=>"
%left "=>"
%left or
%left and
%left '¬'
%left '(' ')'
%%
--Grammar
Prop :: {Sentence}
Prop : Sentence '.' {$1}
Sentence :: {Sentence}
Sentence : AtomSent {Atom $1}
    | CompSent {Comp $1}
AtomSent :: {AtomSent}
AtomSent : var { Variable $1 }
CompSent :: {CompSent}
CompSent : '(' Sentence ')' { Bracket $2 }
    | Sentence Connective Sentence {Bin $2 $1 $3}
    | '¬' Sentence {Not $2}
Connective :: {Connective}
Connective : and {And}
    | or {Or}
    | "=>" {Imp}
    | "<=>" {DImp}

{
--Error function
parseError :: [Token] -> a
parseError _ = error "parseError: Syntax analysis error.\n"
--Data types to represent the grammar
data Sentence
    = Atom AtomSent
    | Comp CompSent
    deriving Show
data AtomSent = Variable String deriving Show
data CompSent
      = Bin Connective Sentence Sentence
      | Not Sentence
      | Bracket Sentence
      deriving Show
data Connective
    = And
    | Or
    | Imp
    | DImp
    deriving Show
--Data types for the tokens
data Token
      = TokenVar String
      | TokenOr
      | TokenAnd
      | TokenNot
      | TokenImp
      | TokenDImp
      | TokenOB
      | TokenCB
      | TokenEnd
      deriving Show
--Lexer
lexer :: String -> [Token]
lexer [] = []  -- empty string
lexer (c:cs)   -- the string is a character c followed by the remaining characters cs
      | isSpace c = lexer cs
      | isAlpha c = lexVar (c:cs)
      | isSymbol c = lexSym (c:cs)
      | c== '(' = TokenOB : lexer cs
      | c== ')' = TokenCB : lexer cs
      | c== '¬' = TokenNot : lexer cs --solved
      | c== '.'  = [TokenEnd]
      | otherwise = error "lexer: invalid token"
lexVar cs =
   case span isAlpha cs of
      ("or",rest) -> TokenOr : lexer rest
      ("and",rest)  -> TokenAnd : lexer rest
      (var,rest)   -> TokenVar var : lexer rest
lexSym cs =
    case span isSymbol cs of
        ("=>",rest) -> TokenImp : lexer rest
        ("<=>",rest) -> TokenDImp : lexer rest
}
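
For reference, a quick way to build and try the parser out (a sketch; assuming GHC and Happy are installed and the grammar is saved as FNC.y):

happy FNC.y                 # generates FNC.hs
ghci FNC.hs
ghci> parse (lexer "(p and q) => r.")
Comp (Bin Imp (Comp (Bracket (Comp (Bin And (Atom (Variable "p")) (Atom (Variable "q")))))) (Atom (Variable "r")))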

Now, I have two questions:

  1. For some reason I'm getting 4 shift/reduce conflicts, and I really don't know where they might be, since I thought the precedence declarations would resolve them (and I think I followed the BNF grammar correctly)...
  2. (This is more of a Haskell question.) In my lexer function, for some reason I get a parse error on the line where I say how to handle '¬'; if I remove that line, it works. Why is that? (This question has been solved.)

If you run Happy with -i, it will generate an info file. That file lists all the states your parser has, and for each state it lists all the possible transitions. You can use this information to decide whether the shift/reduce conflicts are ones you actually care about.

Information on invoking Happy and on conflicts:

  • http://www.haskell.org/happy/doc/html/sec-invoking.html
  • http://www.haskell.org/happy/doc/html/sec-conflict-tips.html
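
As a concrete example (a sketch; assuming the grammar file is named FNC.y, as in the info file header below), generating and inspecting the info file looks like:

happy -i FNC.y              # writes the parser to FNC.hs and the state listing to FNC.info
less FNC.info               # search for "conflicts" to jump to the offending states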

Here is part of the output from -i. I've removed everything except for state 17. You will want to get a copy of this file yourself so that you can properly debug the problem; what you see here is only to help talk about it:

-----------------------------------------------------------------------------
Info file generated by Happy Version 1.18.10 from FNC.y
-----------------------------------------------------------------------------
state 17 contains 4 shift/reduce conflicts.
-----------------------------------------------------------------------------
Grammar
-----------------------------------------------------------------------------
    %start_parse -> Prop                               (0)
    Prop -> Sentence '.'                               (1)
    Sentence -> AtomSent                               (2)
    Sentence -> CompSent                               (3)
    AtomSent -> var                                    (4)
    CompSent -> '(' Sentence ')'                       (5)
    CompSent -> Sentence Connective Sentence           (6)
    CompSent -> '¬' Sentence                          (7)
    Connective -> and                                  (8)
    Connective -> or                                   (9)
    Connective -> "=>"                                 (10)
    Connective -> "<=>"                                (11)
-----------------------------------------------------------------------------
Terminals
-----------------------------------------------------------------------------
    var            { TokenVar $$ }
    or             { TokenOr }
    and            { TokenAnd }
    '¬'           { TokenNot }
    "=>"           { TokenImp }
    "<=>"          { TokenDImp }
    '('            { TokenOB }
    ')'            { TokenCB }
    '.'            { TokenEnd }
-----------------------------------------------------------------------------
Non-terminals
-----------------------------------------------------------------------------
    %start_parse    rule  0
    Prop            rule  1
    Sentence        rules 2, 3
    AtomSent        rule  4
    CompSent        rules 5, 6, 7
    Connective      rules 8, 9, 10, 11
-----------------------------------------------------------------------------
States
-----------------------------------------------------------------------------
State 17
    CompSent -> Sentence . Connective Sentence          (rule 6)
    CompSent -> Sentence Connective Sentence .          (rule 6)
    or             shift, and enter state 12
            (reduce using rule 6)
    and            shift, and enter state 13
            (reduce using rule 6)
    "=>"           shift, and enter state 14
            (reduce using rule 6)
    "<=>"          shift, and enter state 15
            (reduce using rule 6)
    ')'            reduce using rule 6
    '.'            reduce using rule 6
    Connective     goto state 11
-----------------------------------------------------------------------------
Grammar Totals
-----------------------------------------------------------------------------
Number of rules: 12
Number of terminals: 9
Number of non-terminals: 6
Number of states: 19

This output is basically saying that it runs into a little ambiguity when it is looking at the connectives. As it turns out, the slides you linked mention this (slide 11): "ambiguity is resolved through precedence ∧ ∨ ⇒ ⇔ or parentheses".

At this point, I would recommend looking at the shift/reduce conflicts and the precedences you want, to see whether the parser you have will do the right thing. If it does, then you can safely ignore the warnings. If it doesn't, you have more work to do.
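
If you do want the warnings themselves to go away, one common fix (a sketch, not part of the original answer) is to fold the connective tokens directly into the CompSent productions. The conflicts arise because rule 6, CompSent -> Sentence Connective Sentence, contains no terminal symbol, so the %left declarations on or, and, "=>" and "<=>" never give that rule a precedence, and Happy cannot use them to resolve the ambiguity. With the tokens inlined, each production ends in a terminal that carries a precedence:

CompSent :: {CompSent}
CompSent : '(' Sentence ')'              { Bracket $2 }
    | Sentence and Sentence              { Bin And $1 $3 }
    | Sentence or Sentence               { Bin Or $1 $3 }
    | Sentence "=>" Sentence             { Bin Imp $1 $3 }
    | Sentence "<=>" Sentence            { Bin DImp $1 $3 }
    | '¬' Sentence                       { Not $2 }

The Connective grammar rule then becomes unused and can be dropped, while the Connective data type is still needed for the Bin constructor.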

I can also answer your second question:

| c== '¬' == TokenNot : lexer cs --problem here
--        ^^

Where there should be a single =, you have a ==.
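
For completeness, the corrected guard, as it already appears in the lexer above:

      | c== '¬' = TokenNot : lexer cs --solved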
