    • 32. Granted invention patent
    • Title: Method and system for automated annotation of persons in video content
    • Publication No.: US08213689B2
    • Publication date: 2012-07-03
    • Application No.: US12172939
    • Filing date: 2008-07-14
    • Inventors: Jay Yagnik; Ming Zhao
    • IPC: G06K9/00; G06K9/62
    • CPC: G06K9/00711; G06F17/30781; G06K9/00295; G06K9/6255
    • Abstract: Methods and systems for automated annotation of persons in video content are disclosed. In one embodiment, a method of identifying faces in a video includes the stages of: generating face tracks from input video streams; selecting key face images for each face track; clustering the face tracks to generate face clusters; creating face models from the face clusters; and correlating face models with a face model database. In another embodiment, a system for identifying faces in a video includes a face model database having face entries with face models and corresponding names, and a video face identifier module. In yet another embodiment, the system for identifying faces in a video can also have a face model generator.
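The pipeline this abstract describes (face tracks → key face images → face clusters → face models → database correlation) can be sketched in a few lines of pure Python. Everything below is a hypothetical illustration under toy assumptions: the 2-D "embeddings", the greedy distance-threshold clustering, and the names `cluster_tracks`, `face_models`, and `annotate` are invented for this sketch, not taken from the patent.

```python
import math


def dist(a, b):
    # Euclidean distance between two embeddings (Python 3.8+).
    return math.dist(a, b)


def cluster_tracks(track_embeddings, threshold=0.5):
    """Greedy agglomeration: assign each face-track embedding to the first
    cluster whose centroid is within `threshold`, else start a new cluster."""
    clusters = []  # each cluster is a list of embeddings
    for emb in track_embeddings:
        for c in clusters:
            centroid = tuple(sum(x) / len(c) for x in zip(*c))
            if dist(emb, centroid) < threshold:
                c.append(emb)
                break
        else:
            clusters.append([emb])
    return clusters


def face_models(clusters):
    """One face model (here simply the centroid) per face cluster."""
    return [tuple(sum(x) / len(c) for x in zip(*c)) for c in clusters]


def annotate(models, database):
    """Correlate each face model with the nearest database entry
    (name, embedding) and return the matched names."""
    return [min(database, key=lambda entry: dist(m, entry[1]))[0]
            for m in models]
```

In a real system the embeddings would come from a face detector/recognizer and the clustering would be far more robust; the point here is only the shape of the dataflow.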
    • 33. Granted invention patent
    • Title: Supervised learning using multi-scale features from time series events and scale space decompositions
    • Publication No.: US08140451B1
    • Publication date: 2012-03-20
    • Application No.: US13183375
    • Filing date: 2011-07-14
    • Inventors: Ullas Gargi; Jay Yagnik
    • IPC: G06F11/00
    • CPC: G06K9/00536; G06K9/00516
    • Abstract: Disclosed herein is a method, a system, and a computer program product for generating a statistical classification model used by a computer system to determine a class associated with an unlabeled time series event. Initially, a set of labeled time series events is received. A set of time series features is identified for a selected set of the labeled time series events. A plurality of scale space decompositions is generated based on the set of time series features. A plurality of multi-scale features is generated based on the plurality of scale space decompositions. A first subset of the plurality of multi-scale features is identified that corresponds at least in part to a subset of space or time points within a time series event containing feature data that distinguish the event as belonging to a class of time series events corresponding to the class label. A statistical classification model for classifying an unlabeled time series event based on the class corresponding to the class label is generated based at least in part on the first subset of the plurality of multi-scale features.
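As a rough illustration of the claimed flow — scale space decomposition of a time series, multi-scale features, then a statistical classification model — here is a stdlib-only Python sketch. The box-filter smoothing, the min/max/mean features, and the nearest-centroid "model" are simplifications chosen for brevity; all function names are hypothetical.

```python
def smooth(series, window):
    """One level of a (box-filter) scale space: moving-average smoothing."""
    half = window // 2
    return [sum(series[max(0, i - half):i + half + 1]) /
            len(series[max(0, i - half):i + half + 1])
            for i in range(len(series))]


def multiscale_features(series, scales=(1, 3, 5)):
    """Concatenate simple statistics of the series at each scale."""
    feats = []
    for w in scales:
        s = series if w == 1 else smooth(series, w)
        feats += [min(s), max(s), sum(s) / len(s)]
    return feats


def train_centroids(labeled_events):
    """Statistical-model sketch: per-class centroid of the feature vectors
    computed from labeled time series events."""
    by_class = {}
    for label, series in labeled_events:
        by_class.setdefault(label, []).append(multiscale_features(series))
    return {label: [sum(col) / len(vecs) for col in zip(*vecs)]
            for label, vecs in by_class.items()}


def classify(model, series):
    """Assign an unlabeled event to the class with the nearest centroid."""
    f = multiscale_features(series)
    return min(model, key=lambda lbl: sum((a - b) ** 2
                                          for a, b in zip(f, model[lbl])))
```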
    • 34. Granted invention patent
    • Title: Three-dimensional wavelet based video fingerprinting
    • Publication No.: US08094872B1
    • Publication date: 2012-01-10
    • Application No.: US11746339
    • Filing date: 2007-05-09
    • Inventors: Jay Yagnik; Henry A. Rowley; Sergey Ioffe
    • IPC: G06K9/00; G06K9/46; H04L9/32; H04N7/167
    • CPC: G06K9/00711; H04N21/23418
    • Abstract: A method and system generate and compare fingerprints for videos in a video library. The video fingerprints provide a compact representation of the spatial and sequential characteristics of the video that can be used to quickly and efficiently identify video content. Because the fingerprints are based on spatial and sequential characteristics rather than exact bit sequences, the visual content of videos can be effectively compared even when there are small differences between the videos in compression factors, source resolutions, start and stop times, frame rates, and so on. Comparison of video fingerprints can be used, for example, to search for and remove copyright-protected videos from a video library. Further, duplicate videos can be detected and discarded in order to preserve storage space.
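A coarse stand-in for a 3-D wavelet fingerprint can be sketched as sign bits over space-time octant means: like a first-level 3-D Haar decomposition, the sign pattern survives small global changes (e.g. a uniform brightness shift from re-compression) and supports fast Hamming-distance comparison. The octant scheme and all names below are this sketch's assumptions, not the patented transform.

```python
def octant_mean(video, kt, ky, kx):
    """Mean pixel value of one of the 8 octants of the t*h*w cube
    (dimensions are assumed even for simplicity)."""
    t, h, w = len(video), len(video[0]), len(video[0][0])
    vals = [video[k][i][j]
            for k in range(kt * t // 2, (kt + 1) * t // 2)
            for i in range(ky * h // 2, (ky + 1) * h // 2)
            for j in range(kx * w // 2, (kx + 1) * w // 2)]
    return sum(vals) / len(vals)


def fingerprint(video):
    """8-bit sketch: one sign bit per space-time octant, relative to the
    global mean (a coarse stand-in for 3-D Haar detail coefficients)."""
    t, h, w = len(video), len(video[0]), len(video[0][0])
    gmean = sum(video[k][i][j] for k in range(t)
                for i in range(h) for j in range(w)) / (t * h * w)
    return [int(octant_mean(video, kt, ky, kx) >= gmean)
            for kt in (0, 1) for ky in (0, 1) for kx in (0, 1)]


def hamming(fp_a, fp_b):
    """Near-duplicate check: a small Hamming distance suggests the
    same underlying content."""
    return sum(a != b for a, b in zip(fp_a, fp_b))
```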
    • 35. Granted invention patent
    • Title: Supervised learning using multi-scale features from time series events and scale space decompositions
    • Publication No.: US08001062B1
    • Publication date: 2011-08-16
    • Application No.: US11952436
    • Filing date: 2007-12-07
    • Inventors: Ullas Gargi; Jay Yagnik
    • IPC: G06F11/00
    • CPC: G06K9/00536; G06K9/00516
    • Abstract: Disclosed herein is a method, a system, and a computer program product for generating a statistical classification model used by a computer system to determine a class associated with an unlabeled time series event. Initially, a set of labeled time series events is received. A set of time series features is identified for a selected set of the labeled time series events. A plurality of scale space decompositions is generated based on the set of time series features. A plurality of multi-scale features is generated based on the plurality of scale space decompositions. A first subset of the plurality of multi-scale features is identified that corresponds at least in part to a subset of space or time points within a time series event containing feature data that distinguish the event as belonging to a class of time series events corresponding to the class label. A statistical classification model for classifying an unlabeled time series event based on the class corresponding to the class label is generated based at least in part on the first subset of the plurality of multi-scale features.
    • 39. Granted invention patent
    • Title: Correlation-based method for representing long-timescale structure in time-series data
    • Publication No.: US09367612B1
    • Publication date: 2016-06-14
    • Application No.: US13300057
    • Filing date: 2011-11-18
    • Inventors: Douglas Eck; Jay Yagnik
    • IPC: G06F17/00; G06F17/30
    • CPC: G06F17/30743; G06F17/30551; G06F17/3074
    • Abstract: A system identifies a set of initial segments of a time-based data item, such as audio. The segments can be defined at regular time intervals within the time-based data item. The initial segments are short segments. The system computes a short-timescale vectorial representation for each initial segment and compares the short-timescale vectorial representation for each initial segment with other short-timescale vectorial representations of the segments in a time duration within the time-based data item (e.g., audio) immediately preceding or immediately following the initial segment. The system generates a representation of long-timescale information for the time-based data item based on a comparison of the short-timescale vectorial representations of the initial segments and the short-timescale vectorial representations of immediate segments. The representation of long-timescale information identifies an underlying repetition structure of the time-based data item, such as rhythm or phrasing in an audio item.
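The correlation-based representation can be illustrated directly: cut the series into short segments at regular intervals, treat each segment as a vector, and correlate each vector with its neighbors inside a window; repetition structure (e.g. rhythm) shows up as rows of near-1 correlations. A stdlib-only sketch — the segment length, window size, and function names are assumptions for illustration:

```python
def segment(series, seg_len):
    """Initial short segments defined at regular intervals."""
    return [series[i:i + seg_len]
            for i in range(0, len(series) - seg_len + 1, seg_len)]


def correlation(u, v):
    """Pearson-style correlation between two short-timescale vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    du = sum((a - mu) ** 2 for a in u) ** 0.5
    dv = sum((b - mv) ** 2 for b in v) ** 0.5
    return num / (du * dv) if du and dv else 0.0


def long_timescale(series, seg_len=4, window=4):
    """For each segment, correlate its vector with the `window` segments
    immediately preceding and following it; the resulting rows expose
    repetition at timescales longer than one segment."""
    segs = segment(series, seg_len)
    rep = []
    for i, s in enumerate(segs):
        row = [correlation(s, segs[j])
               for j in range(max(0, i - window), min(len(segs), i + window + 1))
               if j != i]
        rep.append(row)
    return rep
```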
    • 40. Granted invention patent
    • Title: Predicting engagement in video content
    • Publication No.: US08959540B1
    • Publication date: 2015-02-17
    • Application No.: US12783524
    • Filing date: 2010-05-19
    • Inventors: Ullas Gargi; Jay Yagnik; Anindya Sarkar
    • IPC: H04N7/16; H04H60/32
    • CPC: H04N21/23418; H04H20/93; H04H60/31; H04H60/46; H04H60/59; H04H60/63; H04N21/237; H04N21/251; H04N21/25891; H04N21/44008; H04N21/44222; H04N21/466; H04N21/4826; H04N21/6582
    • Abstract: User engagement in unwatched videos is predicted by collecting and aggregating data describing user engagement with watched videos. The data are normalized to reduce the influence of factors other than the content of the videos on user engagement. Engagement metrics are calculated for segments of watched videos that indicate user engagement with each segment relative to overall user engagement with the watched videos. Features of the watched videos within time windows are characterized, and a function is learned that relates the features of the videos within the time windows to the engagement metrics for the time windows. The features of a time window of an unwatched video are characterized, and the learned function is applied to the features to predict user engagement for the time window of the unwatched video. The unwatched video can be enhanced based on the predicted user engagement.
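Two pieces of the described approach lend themselves to a tiny sketch: normalizing per-segment engagement against a video's overall engagement, and learning a function from a time-window feature to the engagement metric. The one-variable least-squares line below is a deliberate simplification of whatever function class the patent contemplates, and the single-feature setup and all names are hypothetical.

```python
def segment_metrics(watch_fractions):
    """Engagement metric per segment: engagement with each segment
    relative to overall engagement with the watched video."""
    overall = sum(watch_fractions) / len(watch_fractions)
    return [w / overall for w in watch_fractions]


def fit_linear(xs, ys):
    """Learn y = a*x + b relating a window feature (x) to the
    engagement metric (y) via least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx


def predict(model, feature):
    """Apply the learned function to a window of an unwatched video."""
    a, b = model
    return a * feature + b
```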