# nlp_pipeline.py
from transformers import pipeline
from sentence_transformers import SentenceTransformer

# Load lighter/CPU-friendly models for HF Space
SUMMARIZER = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6", device=-1)
# NER model (token-classification)
NER = pipeline("ner", model="dbmdz/bert-large-cased-finetuned-conll03-english", aggregation_strategy="simple", device=-1)
EMBED_MODEL = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # small & fast

def summarize(text, max_length=120):
    """Summarize text; fall back to naive fixed-size chunking for long inputs."""
    if len(text) < 800:
        s = SUMMARIZER(text, max_length=max_length, min_length=40, do_sample=False)
        return s[0]["summary_text"]
    # Naive character-based chunking: 700 characters stays comfortably under
    # the model's 1024-token input limit; chunk summaries are joined in order.
    parts = []
    chunk_size = 700
    for i in range(0, len(text), chunk_size):
        chunk = text[i:i + chunk_size]
        parts.append(SUMMARIZER(chunk, max_length=60, min_length=20, do_sample=False)[0]["summary_text"])
    return " ".join(parts)

def extract_entities(text):
    """Run NER and group entities by label (e.g. PER, ORG, LOC)."""
    ner = NER(text)
    # With aggregation_strategy="simple", each result looks like
    # {'entity_group', 'score', 'word', 'start', 'end'}.
    grouped = {}
    for ent in ner:
        key = ent.get("entity_group") or ent.get("entity")
        # float() casts the numpy float32 score so results are JSON-serializable.
        grouped.setdefault(key, []).append({"text": ent["word"], "score": float(ent["score"])})
    return grouped

def embed_text(text):
    """Return a unit-normalized embedding (numpy array) for the text."""
    return EMBED_MODEL.encode(text, convert_to_numpy=True, normalize_embeddings=True)
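
# Because embed_text() returns unit-normalized vectors, cosine similarity
# reduces to a plain dot product. This helper is an illustrative sketch,
# not part of the original pipeline.
def embedding_similarity(a, b):
    """Cosine similarity of two unit-normalized embedding vectors."""
    return float(a @ b)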

def get_sentence_provenance(sentences, entities):
    """Map each entity string to the sentences that mention it (case-insensitive substring match)."""
    prov = {}
    for t in entities:
        prov[t] = [s for s in sentences if t.lower() in s.lower()]
    return prov

def process_document(doc):
    """Process a doc dict ("text" plus a pre-split "sentences" list) into a structured record."""
    text = doc["text"]
    summary = summarize(text)
    entities_grouped = extract_entities(text)
    # Flatten to the set of unique entity strings.
    entity_texts = set()
    for items in entities_grouped.values():
        for item in items:
            entity_texts.add(item["text"])
    provenance = get_sentence_provenance(doc["sentences"], entity_texts)
    embedding = embed_text(summary)  # index the summary embedding for compactness
    tags = []  # optional: simple tags from the most frequent NER labels
    return {
        "summary": summary,
        "entities": entities_grouped,
        "entity_texts": list(entity_texts),
        "provenance": provenance,
        "embedding": embedding,
        "tags": tags,
    }
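
# --- Usage sketch ---
# A minimal demonstration of the input schema process_document() expects:
# a dict with the raw "text" and a pre-split "sentences" list. The regex
# sentence splitter and the sample text below are illustrative assumptions,
# not part of the pipeline itself.
if __name__ == "__main__":
    import re

    sample = (
        "Hugging Face is based in New York City. "
        "The company maintains the Transformers library."
    )
    doc = {
        "text": sample,
        "sentences": re.split(r"(?<=[.!?])\s+", sample.strip()),
    }
    result = process_document(doc)
    print("Summary:", result["summary"])
    print("Entities:", result["entity_texts"])
    print("Embedding shape:", result["embedding"].shape)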