Changeset - 9a773d2f022b
[Not reviewed]
stable
FUJIWARA Katsunori - 9 years ago 2017-01-22 18:17:38
foozy@lares.dti.ne.jp
Grafted from: 73e3599971da
journal: make "repository:" filtering condition work as expected (Issue #261)

Before this revision, journal filtering conditions like the ones below
never matched any entry, even if corresponding repositories existed.

- repository:foo/bar
- repository:foo-bar

The Whoosh library, which is used to parse the filtering condition, by
default:

- treats almost all non-alphanumeric characters as delimiters when
  parsing a condition
- joins the resulting terms with "AND" when filtering

For example, filtering condition "repository:foo/bar" is translated as
"repository:foo AND repository:bar". This combined condition never
matches against any entry, because it is impossible that "repository"
field in DBMS table "user_logs" has both "foo" and "bar" values at
same time.
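
A minimal sketch of that parsing behaviour, using the Whoosh query
parser directly (the schema and condition below are only for
illustration):

    from whoosh.fields import Schema, TEXT
    from whoosh.qparser import QueryParser

    schema = Schema(repository=TEXT())
    parser = QueryParser('repository', schema=schema)
    # "foo/bar" is tokenized by the TEXT analyzer, and the pieces are
    # joined with the parser's default AND group:
    print(parser.parse(u'repository:foo/bar'))
    # -> roughly (repository:foo AND repository:bar)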

Using TEXT for "repository" of JOURNAL_SCHEMA causes this issue,
because TEXT assumes tokenization at parsing.

In addition, using TEXT also causes "stop words" in filtering
conditions to be silently ignored. For example, "this", "a", "you",
and so on are dropped at parse time, because they are too generic from
the point of view of generic text search.
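
A small sketch of that stop-word behaviour, assuming Whoosh's default
StandardAnalyzer (the analyzer behind the TEXT field type); the sample
text is just for illustration:

    from whoosh.analysis import StandardAnalyzer

    print([t.text for t in StandardAnalyzer()(u'this is a repository')])
    # -> ['repository']   ("this", "is" and "a" are dropped as stop words)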

To make "repository:" filtering condition work as expected, this
revision uses ID instead of TEST for "repository" of
JOURNAL_COLUMN. ID avoids both tokenization and removing "stop words".
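
With ID, the same condition is parsed into a single, untokenized term;
a minimal sketch mirroring the one above (illustrative schema only):

    from whoosh.fields import Schema, ID
    from whoosh.qparser import QueryParser

    schema = Schema(repository=ID())
    parser = QueryParser('repository', schema=schema)
    print(parser.parse(u'repository:foo/bar'))
    # -> a single exact term, roughly repository:foo/bar (no AND-splitting)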

This replacement should be safe for already existing DBMS instances,
because:

- JOURNAL_SCHEMA is used only to parse the filtering condition
- the DBMS table "user_logs" itself is defined by the UserLog class
  (in kallithea/model/db.py)

BTW, using ID also skips normalization to lowercase. This does not
violate the current case-insensitive search policy, because the
LOWER-ing in the actual SQL query is done by get_filterion() or similar
in kallithea/controllers/admin/admin.py.
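
For illustration only, case-insensitive matching on the SQL side is
typically done by LOWER-ing both sides of the comparison; a rough,
hypothetical SQLAlchemy sketch (not the actual helper in
kallithea/controllers/admin/admin.py; the function name is made up):

    from sqlalchemy import func
    from kallithea.model.db import UserLog

    def filter_by_repository(query, term):
        # hypothetical sketch: LOWER both the column and the search term,
        # so matching stays case-insensitive even though the Whoosh ID
        # field no longer lowercases the parsed value
        return query.filter(func.lower(UserLog.repository) == func.lower(term))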
1 file changed with 1 insertion and 1 deletion:
0 comments (0 inline, 0 general)
kallithea/lib/indexers/__init__.py
@@ -37,97 +37,97 @@ from whoosh.analysis import RegexTokeniz
 from whoosh.fields import TEXT, ID, STORED, NUMERIC, BOOLEAN, Schema, FieldType, DATETIME
 from whoosh.formats import Characters
 from whoosh.highlight import highlight as whoosh_highlight, HtmlFormatter, ContextFragmenter
 from kallithea.lib.utils2 import LazyProperty
 
 log = logging.getLogger(__name__)
 
 # CUSTOM ANALYZER wordsplit + lowercase filter
 ANALYZER = RegexTokenizer(expression=r"\w+") | LowercaseFilter()
 
 #INDEX SCHEMA DEFINITION
 SCHEMA = Schema(
     fileid=ID(unique=True),
     owner=TEXT(),
     repository=TEXT(stored=True),
     path=TEXT(stored=True),
     content=FieldType(format=Characters(), analyzer=ANALYZER,
                       scorable=True, stored=True),
     modtime=STORED(),
     extension=TEXT(stored=True)
 )
 
 IDX_NAME = 'HG_INDEX'
 FORMATTER = HtmlFormatter('span', between='\n<span class="break">...</span>\n')
 FRAGMENTER = ContextFragmenter(200)
 
 CHGSETS_SCHEMA = Schema(
     raw_id=ID(unique=True, stored=True),
     date=NUMERIC(stored=True),
     last=BOOLEAN(),
     owner=TEXT(),
     repository=ID(unique=True, stored=True),
     author=TEXT(stored=True),
     message=FieldType(format=Characters(), analyzer=ANALYZER,
                       scorable=True, stored=True),
     parents=TEXT(),
     added=TEXT(),
     removed=TEXT(),
     changed=TEXT(),
 )
 
 CHGSET_IDX_NAME = 'CHGSET_INDEX'
 
 # used only to generate queries in journal
 JOURNAL_SCHEMA = Schema(
     username=TEXT(),
     date=DATETIME(),
     action=TEXT(),
-    repository=TEXT(),
+    repository=ID(),
     ip=TEXT(),
 )
 
 
 class WhooshResultWrapper(object):
     def __init__(self, search_type, searcher, matcher, highlight_items,
                  repo_location):
         self.search_type = search_type
         self.searcher = searcher
         self.matcher = matcher
         self.highlight_items = highlight_items
         self.fragment_size = 200
         self.repo_location = repo_location
 
     @LazyProperty
     def doc_ids(self):
         docs_id = []
         while self.matcher.is_active():
             docnum = self.matcher.id()
             chunks = [offsets for offsets in self.get_chunks()]
             docs_id.append([docnum, chunks])
             self.matcher.next()
         return docs_id
 
     def __str__(self):
         return '<%s at %s>' % (self.__class__.__name__, len(self.doc_ids))
 
     def __repr__(self):
         return self.__str__()
 
     def __len__(self):
         return len(self.doc_ids)
 
     def __iter__(self):
         """
         Allows Iteration over results,and lazy generate content
 
         *Requires* implementation of ``__getitem__`` method.
         """
         for docid in self.doc_ids:
             yield self.get_full_content(docid)
 
     def __getitem__(self, key):
         """
         Slicing of resultWrapper
         """
         i, j = key.start, key.stop