    • 31. Granted Invention Patent
    • Method and system for immersing face images into a video sequence
    • US07826644B2
    • 2010-11-02
    • US12758172
    • 2010-04-12
    • Rajeev Sharma; Namsoon Jung
    • G06K9/00
    • G06K9/00744; G06K9/00221; G11B27/036; G11B27/28
    • The present invention is a system and method for immersing facial images of people captured automatically from an image or a sequence of images into a live video playback sequence. This method allows viewers to perceive a participation in the viewed “movie” segment. A format is defined for storing the video such that this live playback of the video sequence is possible. A plurality of Computer Vision algorithms in the invention processes a plurality of input image sequences from the means for capturing images, which is pointed at the users in the vicinity of the system and performs the head detection and tracking. The interaction in the invention can be performed either in real-time or off-line depending on the embodiment of the invention in an uncontrolled background.
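The abstract above describes the face-immersion idea only at a high level. As a rough illustration, and not the patented storage format or Computer Vision algorithms, the sketch below alpha-blends a detected face crop into a per-frame target rectangle of a playback sequence; the `immerse_face` function, the slot layout, and the blend weight are hypothetical.

```python
# Minimal sketch: composite a face crop into a frame at a predefined slot.
import numpy as np

def immerse_face(frame: np.ndarray, face: np.ndarray, slot: tuple, alpha: float = 0.8) -> np.ndarray:
    """Blend `face` into `frame` at slot = (x, y, w, h)."""
    x, y, w, h = slot
    # Nearest-neighbour resize of the face crop to the slot size (illustrative only).
    ys = np.linspace(0, face.shape[0] - 1, h).astype(int)
    xs = np.linspace(0, face.shape[1] - 1, w).astype(int)
    resized = face[ys][:, xs]
    out = frame.copy()
    region = out[y:y + h, x:x + w].astype(float)
    out[y:y + h, x:x + w] = (alpha * resized + (1 - alpha) * region).astype(frame.dtype)
    return out

frame = np.zeros((480, 640, 3), dtype=np.uint8)      # one playback frame
face = np.full((100, 80, 3), 200, dtype=np.uint8)    # a detected face crop
print(immerse_face(frame, face, slot=(50, 40, 120, 160)).shape)
```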
    • 32. Granted Invention Patent
    • Method and system for printing of automatically captured facial images augmented with promotional content
    • US07283650B1
    • 2007-10-16
    • US10724302
    • 2003-11-26
    • Rajeev Sharma; Namsoon Jung
    • G06K9/00
    • G06Q30/02; G06Q30/0225; G06Q30/0234; G06Q30/0235; G06Q30/0239; G06Q30/0269; G06Q30/0277
    • The present invention is a system and method for printing facial images of people, captured automatically from a sequence of images, onto coupons or any promotional printed material, such as postcards, stamps, promotional brochures, or tickets for movies or shows. The coupon can also be used as a means to encourage people to visit specific sites as a way of promoting goods or services sold at the visited site. The invention is named UCOUPON. A plurality of Computer Vision algorithms in the UCOUPON processes a plurality of input image sequences from one or a plurality of means for capturing images that is pointed at the customers in the vicinity of the system in an uncontrolled background. The coupon content is matched by the customer's demographic information, and primarily, the UCOUPON system does not require any customer input or participation to gather the demographic data, operating fully independently and automatically. The embodiment of the UCOUPON system can be integrated into any public place that requires the usage of coupons, such as existing checkout counters of the retail store environment. The UCOUPON can also be integrated into a stand-alone system, such as a coupon Kiosk system.
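The matching of coupon content to estimated demographics is described only at a high level. The sketch below shows one naive way such a selection could work; the `Coupon` structure, attribute names, and scoring rule are assumptions, not the UCOUPON matching logic.

```python
# Illustrative only: pick promotional content by estimated demographics.
from dataclasses import dataclass

@dataclass
class Coupon:
    offer: str
    target: dict                      # e.g. {"gender": "female", "age_group": "20-39"}

def match_coupon(demographics: dict, pool: list) -> Coupon:
    # Score each coupon by how many targeted attributes agree with the estimate.
    def score(c: Coupon) -> int:
        return sum(1 for k, v in c.target.items() if demographics.get(k) == v)
    return max(pool, key=score)

pool = [Coupon("10% off cosmetics", {"gender": "female"}),
        Coupon("2-for-1 movie ticket", {"age_group": "20-39"})]
print(match_coupon({"gender": "female", "age_group": "40-59"}, pool).offer)
```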
    • 33. Granted Invention Patent
    • Method and system for real-time facial image enhancement
    • US07227976B1
    • 2007-06-05
    • US10609245
    • 2003-06-27
    • Namsoon Jung; Rajeev Sharma
    • G06K9/00; G06T15/00; A61B3/10
    • G06T11/00; G06K9/00228; G06T7/20
    • The present invention is a system and method for detecting facial features of humans in a continuous video and superimposing virtual objects onto the features automatically and dynamically in real-time. The suggested system is named Facial Enhancement Technology (FET). The FET system consists of three major modules, initialization module, facial feature detection module, and superimposition module. Each module requires demanding processing time and resources by nature, but the FET system integrates these modules in such a way that real time processing is possible. The users can interact with the system and select the objects on the screen. The superimposed image moves along with the user's random motion dynamically. The FET system enables the user to experience something that was not possible before by augmenting the person's facial images. The hardware of the FET system comprises the continuous image-capturing device, image processing and controlling system, and output display system.
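The abstract names three modules (initialization, facial feature detection, superimposition) integrated into a real-time loop. The skeleton below only illustrates that control flow; the stub detector and renderer are stand-ins, not the FET algorithms.

```python
# Control-flow sketch of the three-stage per-frame loop described in the abstract.
def initialize():
    return {"object": "glasses", "tracker_state": None}

def detect_features(frame, state):
    # Stand-in: pretend the eyes were found at fixed frame coordinates.
    return {"left_eye": (120, 90), "right_eye": (180, 90)}

def superimpose(frame, features, obj):
    # Stand-in: report where the virtual object would be drawn.
    return f"draw {obj} spanning {features['left_eye']}..{features['right_eye']}"

state = initialize()
for frame in range(3):               # stands in for a continuous video stream
    features = detect_features(frame, state)
    print(superimpose(frame, features, state["object"]))
```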
    • 35. Granted Invention Patent
    • Method and system for determining the impact of crowding on retail performance
    • US08812344B1
    • 2014-08-19
    • US12459283
    • 2009-06-29
    • Varij Saurabh; Namsoon Jung; Rajeev Sharma
    • G06Q10/00; G06Q30/02
    • G06Q30/0201; G06K9/00778
    • The present invention is a system, method, and apparatus for determining the impact of crowding on retail performance based on a measurement for behavior patterns of people in a store area. The present invention captures a plurality of input images of the people by at least a means for capturing images, such as cameras, in the store area. In the captured plurality of input images, each person's shopping path is detected by a video analytics-based tracking algorithm. A subset of the people is identified as a crowd in the store area. In relation to the crowd, the behavior patterns of the target person are measured. After aggregating the measurements for the behavior patterns over a predefined window of time, the present invention can calculate a crowd index and a crowd impact index for the store area based on the measurements. A crowd index shows the level of crowd density in the store area caused by a crowd, including traffic count of the crowd in the store area. A crowd impact index comprises a traffic count of the target people who make trips to the store area and a shopping time index, such as average shopping time changes of the target people, in relation to a crowd in the measured store area.
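The abstract defines a crowd index (crowd density including traffic count) and a crowd impact index (target traffic count plus a shopping time index) without publishing formulas. The sketch below shows one plausible way such indices could be computed from the aggregated window measurements; the definitions and normalisations are assumptions.

```python
# Assumed index definitions for illustration only.
def crowd_index(crowd_traffic_count: int, area_sqft: float) -> float:
    # Crowd density proxy: people observed per unit store area over the window.
    return crowd_traffic_count / area_sqft

def crowd_impact_index(target_trips_with_crowd: int,
                       target_trips_without_crowd: int,
                       avg_shop_time_with_crowd: float,
                       avg_shop_time_without_crowd: float) -> dict:
    # Relative change in trips and in average shopping time when a crowd is present.
    return {
        "traffic_change": target_trips_with_crowd / max(target_trips_without_crowd, 1) - 1,
        "shopping_time_change": avg_shop_time_with_crowd / max(avg_shop_time_without_crowd, 1e-9) - 1,
    }

print(crowd_index(240, 1200.0))                  # 0.2 people per sq ft over the window
print(crowd_impact_index(80, 100, 4.5, 6.0))     # fewer trips, shorter dwell when crowded
```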
    • 36. Granted Invention Patent
    • Method and system for rating of out-of-home digital media network based on automatic measurement
    • US08660895B1
    • 2014-02-25
    • US11818485
    • 2007-06-14
    • Varij Saurabh; Jeff Hershey; Satish Mummareddy; Rajeev Sharma; Namsoon Jung
    • G06Q30/02
    • G06Q30/0204; G06Q30/0242
    • The present invention is a method and system for producing a set of ratings for out-of-home media based on the measurement of behavior patterns and demographics of the people in a digital media network. The present invention captures a plurality of input images of the people in the vicinity of sampled out-of-home media in a digital media network by a plurality of means for capturing images, and tracks each person. The present invention processes the plurality of input images in order to analyze the behavior and demographics of the people. The present invention aggregates the measurements for the behavior patterns and demographics of the people, analyzes the data, and extracts characteristic information based on the estimated parameters from the aggregated measurements. Finally, the present invention calculates a set of ratings based on the characteristic information. The plurality of computer vision technologies can comprise face detection, person tracking, body parts detection, and demographic classification of the people, on the captured visual information of the people in the vicinity of the out-of-home media.
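Only the aggregation step is sketched below: turning automatically measured impressions and demographics at one sampled site into a simple per-site rating. The rating definition (impressions per play plus a demographic share) is an assumption, not the patent's rating formula.

```python
# Aggregating measured impressions at a sampled out-of-home site (illustrative).
from collections import Counter

def rate_site(impressions: list, plays: int) -> dict:
    """impressions: one dict of measured attributes per detected viewer."""
    demo = Counter(p["gender"] for p in impressions)
    return {
        "impressions_per_play": len(impressions) / max(plays, 1),
        "demographic_share": {g: n / len(impressions) for g, n in demo.items()},
    }

sample = [{"gender": "female"}, {"gender": "male"}, {"gender": "female"}]
print(rate_site(sample, plays=2))
```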
    • 37. Granted Invention Patent
    • Method and system for age estimation based on relative ages of pairwise facial images of people
    • US08520906B1
    • 2013-08-27
    • US12283595
    • 2008-09-12
    • Hankyu Moon; Rajeev Sharma; Namsoon Jung
    • G06K9/00; G06K9/46; G06K9/66; G06K9/62
    • G06K9/6292; G06K9/6263; G06K2009/00322
    • The present invention is a system and method for estimating the age of people based on their facial images. It addresses the difficulty of annotating the age of a person from facial image by utilizing relative age (such as older than, or younger than) and face-based class similarity (gender, ethnicity or appearance-based cluster) of sampled pair-wise facial images. It involves a unique method for the pair-wise face training and a learning machine (or multiple learning machines) which output the relative age along with the face-based class similarity, of the pairwise facial images. At the testing stage, the given input face image is paired with some number of reference images to be fed to the trained machines. The age of the input face is determined by comparing the estimated relative ages of the pairwise facial images to the ages of reference face images. Because age comparison is more meaningful when the pair belongs to the same demographics category (such as gender and ethnicity) or when the pair has similar appearance, the estimated relative ages are weighted according to the face-based class similarity score between the reference face and the input face.
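The final estimation step lends itself to a worked example: the input face is paired with reference faces of known age, each pair yields a predicted relative age and a face-based class-similarity weight, and the estimate combines them. The plain weighted-mean rule and the numbers below are illustrative assumptions, not the trained machines described in the abstract.

```python
# Similarity-weighted combination of pairwise relative-age predictions (assumed rule).
def estimate_age(pairs: list) -> float:
    # Each pair: reference age, predicted relative age of input vs. reference (years),
    # and class-similarity weight in [0, 1].
    num = sum(p["similarity"] * (p["reference_age"] + p["relative_age"]) for p in pairs)
    den = sum(p["similarity"] for p in pairs)
    return num / den

pairs = [
    {"reference_age": 30, "relative_age": +4, "similarity": 0.9},
    {"reference_age": 45, "relative_age": -9, "similarity": 0.7},
    {"reference_age": 25, "relative_age": +8, "similarity": 0.3},
]
print(round(estimate_age(pairs), 1))  # ≈ 34.6
```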
    • 38. Granted Invention Patent
    • Videore: method and system for storing videos from multiple cameras for behavior re-mining
    • US08457466B1
    • 2013-06-04
    • US12286138
    • 2008-09-29
    • Rajeev Sharma; Namsoon Jung
    • H04N5/772
    • H04N7/181; G06F17/30784; H04N5/247
    • The present invention is a method and system for storing videos by track sequences and selection of video segments in a manner to support “re-mining” by indexing and playback of individual visitors' entire trip to an area covered by overlapping cameras, allowing analysis and recognition of detailed behavior. The present invention captures video streams of the people in the area by multiple cameras and tracks the people in each of the video streams, producing track sequences in each video stream. Using the track sequences, the present invention finds trip information of the people. The present invention determines a first set of video segments that contain the trip information of the people, and compacts each of the video streams by removing a second set of video segments that do not contain the trip information of the people from each of the video streams. The video segments in the first set of video segments are associated with the people by indexing the video segments per person based on the trip information. The final storage format of the videos is a trip-centered format which sequences videos from across multiple cameras in a manner to facilitate multiple applications dealing with behavior analysis, and it is an efficient compact format without losing any video segments that contain the track sequences of the people.
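The trip-centered storage idea can be illustrated as a data-layout sketch: keep only the video segments that contain track sequences and index them per visitor, ordered in time across cameras. The field names below are assumptions; the patent's on-disk format is not reproduced here.

```python
# Indexing retained video segments by visitor trip (illustrative layout only).
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Segment:
    camera_id: str
    start: float                      # seconds
    end: float
    person_ids: tuple                 # track IDs of people seen in this segment

def build_trip_index(segments: list) -> dict:
    index = defaultdict(list)
    for seg in segments:
        if not seg.person_ids:        # segments with no trip information are dropped
            continue
        for pid in seg.person_ids:
            index[pid].append((seg.camera_id, seg.start, seg.end))
    for pid in index:                 # order each trip chronologically across cameras
        index[pid].sort(key=lambda s: s[1])
    return dict(index)

segs = [Segment("cam1", 0.0, 10.0, ("p1",)),
        Segment("cam2", 8.0, 20.0, ("p1", "p2")),
        Segment("cam3", 5.0, 9.0, ())]
print(build_trip_index(segs))
```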
    • 39. Granted Invention Patent
    • Method and system for measuring emotional and attentional response to dynamic digital media content
    • US08401248B1
    • 2013-03-19
    • US12317917
    • 2008-12-30
    • Hankyu Moon; Rajeev Sharma; Namsoon Jung
    • G06K9/00
    • G06K9/00302; G06K9/00315; G06K9/00597; G06Q30/0242
    • The present invention is a method and system to provide an automatic measurement of people's responses to dynamic digital media, based on changes in their facial expressions and attention to specific content. First, the method detects and tracks faces from the audience. It then localizes each of the faces and facial features to extract emotion-sensitive features of the face by applying emotion-sensitive feature filters, to determine the facial muscle actions of the face based on the extracted emotion-sensitive features. The changes in facial muscle actions are then converted to the changes in affective state, called an emotion trajectory. On the other hand, the method also estimates eye gaze based on extracted eye images and three-dimensional facial pose of the face based on localized facial images. The gaze direction of the person, is estimated based on the estimated eye gaze and the three-dimensional facial pose of the person. The gaze target on the media display is then estimated based on the estimated gaze direction and the position of the person. Finally, the response of the person to the dynamic digital media content is determined by analyzing the emotion trajectory in relation to the time and screen positions of the specific digital media sub-content that the person is watching.
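One concrete step in the abstract, estimating the gaze target on the display from the viewer's position and gaze direction, reduces to a ray-plane intersection. The sketch below assumes the display lies in the z = 0 plane and the viewer stands at z > 0; these coordinate conventions, and the example values, are illustrative only.

```python
# Project the estimated gaze direction onto the display plane (assumed geometry).
import numpy as np

def gaze_target_on_display(position: np.ndarray, gaze_dir: np.ndarray):
    """Intersect the ray position + t * gaze_dir with the plane z = 0."""
    if abs(gaze_dir[2]) < 1e-9:
        return None                   # gaze parallel to the screen plane
    t = -position[2] / gaze_dir[2]
    if t <= 0:
        return None                   # looking away from the display
    hit = position + t * gaze_dir
    return hit[:2]                    # (x, y) on the display plane

pos = np.array([0.3, 1.5, 2.0])       # metres: viewer standing 2 m from the screen
gaze = np.array([-0.1, -0.05, -1.0])  # roughly toward the screen, slightly left and down
print(gaze_target_on_display(pos, gaze))
```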
    • 40. Granted Invention Patent
    • Automatic detection and aggregation of demographics and behavior of people
    • US08351647B2
    • 2013-01-08
    • US12002398
    • 2007-12-17
    • Rajeev Sharma; Hankyu Moon; Namsoon Jung
    • G06K9/00; G06Q30/00
    • G06Q30/02
    • The present invention is a system and framework for automatically measuring and correlating visual characteristics of people and accumulating the data for the purpose of demographic and behavior analysis. The demographic and behavior characteristics of people are extracted from a sequence of images using techniques from computer vision. The demographic and behavior characteristics are combined with a timestamp and a location marker to provide a feature vector of a person at a particular time at a particular location. These feature vectors are then accumulated and aggregated automatically in order to generate a data set that can be statistically analyzed, data mined and/or queried.
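The accumulation described above amounts to building records that combine demographic and behaviour attributes with a timestamp and a location marker, then aggregating them for analysis or querying. The field names and the Counter-based grouping in the sketch are illustrative assumptions.

```python
# Assumed record layout and a simple aggregation query over accumulated detections.
from collections import Counter
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PersonFeatures:
    timestamp: datetime
    location: str                     # location marker, e.g. a camera or store-zone ID
    gender: str
    age_group: str
    dwell_seconds: float              # one example behaviour measurement

records = [
    PersonFeatures(datetime(2013, 1, 8, 10, 0), "entrance", "female", "20-39", 4.2),
    PersonFeatures(datetime(2013, 1, 8, 10, 5), "entrance", "male", "40-59", 2.8),
]

# Query example: demographic mix per location marker.
mix = Counter((r.location, r.gender, r.age_group) for r in records)
print(dict(mix))
```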