    • 39. Invention Patent
    • Title: Generating a topic-based summary of textual content
    • Publication No.: GB2573189A
    • Publication Date: 2019-10-30
    • Application No.: GB201901522
    • Filing Date: 2019-02-04
    • Applicant: ADOBE INC
    • Inventors: KUNDAN KRISHNA; BALAJI VASAN SRINIVASAN
    • IPC: G06F16/34; G06F40/00
    • Abstract: A method for generating a summary of textual content tuned to a specific topic involves using a topic-aware encoding model to encode 804 the text using a topic label (e.g. a one-hot vector) to generate topic-aware encoded text. A word generation model selects a next word 808 for the summary from the encoded text. The word generation model is trained using machine learning on training data comprising documents with corresponding summaries, each having an associated topic. The selected next word is provided as feedback 810 to the word generation model. Also disclosed is a method for training the encoding and word generation models by obtaining an intermediate dataset comprising documents and a summary of each document with an associated topic. Training data is generated by merging the text of a first and a second document into a new document associated with the summary and topic of the first document, then merging their text again and associating the resulting new document with the summary and topic of the second document. The original documents are discarded. This process is repeated until the intermediate dataset is exhausted. The training data is then used to train the encoding and word generation models.
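The data-augmentation scheme described in this abstract (consuming documents in pairs, labelling each merged text once with the first source's summary and topic and once with the second's, and discarding the originals) can be sketched in plain Python. All function and field names below are illustrative, not taken from the patent:

```python
def make_merged_training_data(intermediate):
    """Generate training pairs by merging documents pairwise.

    `intermediate` is a list of (text, summary, topic) triples. Documents are
    consumed in pairs: the merged text of each pair is associated once with
    the first document's summary/topic and once with the second's; the
    original (unmerged) documents are discarded.
    """
    training_data = []
    it = iter(intermediate)
    for first in it:
        second = next(it, None)
        if second is None:  # odd leftover document: nothing to merge with
            break
        merged_text = first[0] + " " + second[0]
        # Same merged text, labelled with each source's summary and topic.
        training_data.append((merged_text, first[1], first[2]))
        training_data.append((merged_text, second[1], second[2]))
    return training_data
```

Each merged document thus appears twice in the output, once per label, which is how the abstract describes pairing one body of text with two distinct summary/topic targets.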
    • 40. Invention Patent
    • Title: Abstractive summarization of long documents using deep learning
    • Publication No.: GB2571811A
    • Publication Date: 2019-09-11
    • Application No.: GB201819509
    • Filing Date: 2018-11-30
    • Applicant: ADOBE INC
    • Inventors: ARMAN COHAN; WALTER WEI-TUH CHANG; TRUNG HUU BUI; FRANCK DERNONCOURT; DOO SOON KIM
    • IPC: G06F16/34
    • Abstract: Disclosed is an abstractive summarization method for summarizing documents, including long documents. The method for generating a summary of a structured document having a plurality of sections comprises: processing each word in a plurality of words in a section using a respective first recurrent neural network to generate a word-level representation; processing each word-level representation by a second recurrent neural network to generate a section-level representation; generating a context vector by performing a neural attention process on one or more hidden states of said first recurrent neural network and one or more hidden states of said second recurrent neural network; and then generating a next predicted word in said summary based upon a previously predicted word and said context vector. The method is used, for example, to generate a summary S from scratch using words and phrases that are not taken verbatim from the original document D. Such abstractive summarization is generally a more natural way to summarize a document, as humans also tend to write abstractive summaries in their own words rather than copying exact sentences from a source document. The method is particularly useful for summarizing long, complex documents.
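The attention step in this abstract (a context vector computed from the hidden states of both the word-level and section-level encoders) can be sketched with NumPy. Dot-product scoring against a decoder query and the array shapes are assumptions made for illustration; the abstract does not fix the scoring function:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def context_vector(word_hidden, section_hidden, query):
    """Attend over word- and section-level encoder hidden states.

    word_hidden:    (n_words, d) hidden states from the first (word-level) RNN
    section_hidden: (n_sections, d) hidden states from the second (section-level) RNN
    query:          (d,) decoder state used to score each hidden state
    Returns a (d,) context vector: a softmax-weighted sum of all hidden states.
    """
    states = np.vstack([word_hidden, section_hidden])  # (n_words + n_sections, d)
    scores = states @ query                            # attention energies
    weights = softmax(scores)                          # normalized attention weights
    return weights @ states                            # weighted sum -> (d,)
```

The next predicted word would then be produced by a decoder conditioned on this context vector and the previously predicted word, as the abstract describes.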