    • 3. Granted invention patent
    • Object tracking with integrated motion-based object detection (MogS) and enhanced Kalman-type filtering
    • Patent No.: US09552648B1 (granted 2017-01-24)
    • Application No.: US14066600 (filed 2013-10-29)
    • Assignee: HRL Laboratories, LLC
    • Inventors: Lei Zhang, Deepak Khosla, Yang Chen
    • IPC: G06T7/20
    • CPC: G06T7/277, G06K9/00771, G06K9/3241, G06T7/254
    • Abstract: Described is a system for object tracking with integrated motion-based object detection and enhanced Kalman-type filtering. The system detects a location of a moving object in an image frame using an object detection MogS module, thereby generating an object detection. For each image frame in a sequence of image frames, the system predicts the location of the moving object in the next image frame using a Kalman filter prediction module to generate a predicted object location. The predicted object location is refined using a Kalman filter updating module, and the Kalman filter updating module is controlled by a controller module that monitors a similarity between the predicted object location and the moving object's location in a previous image frame. Finally, a set of detected moving object locations in the sequence of image frames is output.
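The predict/refine loop described in the abstract above can be sketched with a simplified Kalman-type (alpha-beta) tracker. This is an illustrative stand-in, not the patented implementation: the function name, gains, and the distance-based gate standing in for the controller's similarity monitor are all assumptions.

```python
def track(detections, alpha=0.85, beta=0.005, gate=15.0):
    """Kalman-type (alpha-beta) tracker with a gated update step.

    Illustrative sketch only. The controller skips the measurement
    update when the detection is too far from the prediction (a
    simplified stand-in for the patent's similarity monitor).
    `detections` is a list of 1-D positions; None marks frames in
    which the detector found nothing.
    """
    x, v = detections[0], 0.0          # state: position, velocity
    out = [x]
    for z in detections[1:]:
        x_pred = x + v                 # predict location in next frame
        if z is not None and abs(z - x_pred) <= gate:
            r = z - x_pred             # innovation (residual)
            x = x_pred + alpha * r     # refine the predicted location
            v = v + beta * r
        else:
            x = x_pred                 # controller: coast on prediction
        out.append(x)
    return out
```

A full implementation would track 2-D positions with per-frame covariance updates; the gated coast behaviour is the part that mirrors the controller module.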
    • 4. Granted invention patent
    • Bio-inspired method of ground object cueing in airborne motion imagery
    • Patent No.: US09008366B1 (granted 2015-04-14)
    • Application No.: US13938196 (filed 2013-07-09)
    • Assignee: HRL Laboratories, LLC
    • Inventors: Kyungnam Kim, Changsoo S. Jeong, Deepak Khosla, Yang Chen, Shinko Y. Cheng, Alexander L. Honda, Lei Zhang
    • IPC: G06K9/00, G06K9/62
    • CPC: G06K9/6202, G06T7/246, G06T2207/10016, G06T2207/10032, G06T2207/30196, G06T2207/30232, G06T2207/30241
    • Abstract: Described is a method for object cueing in motion imagery. Keypoints and features are extracted from motion imagery, and features between consecutive image frames of the motion imagery are compared to identify similar image frames. A candidate set of matching keypoints is generated by matching keypoints between the similar image frames. A ground plane homography model that fits the candidate set of matching keypoints is determined to generate a set of correct matching keypoints. Each image frame of a set of image frames within a selected time window is registered into a reference frame's coordinate system using the homography transformation. A difference image is obtained between the reference frame and each registered image frame, resulting in multiple difference images. The difference images are then accumulated to calculate a detection image which is used for detection of salient regions. Object cues for surveillance use are produced based on the detected salient regions.
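The accumulation step of the cueing method above — differencing each registered frame against the reference and summing the results into a detection image — can be sketched as follows. The homography registration is assumed to have already been applied, and the function name and threshold are illustrative, not from the patent.

```python
def detection_image(frames, ref_idx=0, thresh=30):
    """Accumulate frame differences against a reference frame.

    Simplified sketch of the cueing pipeline: registration (the
    homography warp) is assumed already done, so frames align
    pixel-for-pixel. `frames` is a list of equally sized 2-D
    grayscale grids (lists of lists of ints). Returns the set of
    (row, col) pixels whose accumulated difference exceeds `thresh`,
    i.e. the salient-region candidates.
    """
    ref = frames[ref_idx]
    h, w = len(ref), len(ref[0])
    acc = [[0] * w for _ in range(h)]
    for k, frame in enumerate(frames):
        if k == ref_idx:
            continue
        for r in range(h):             # per-frame difference image,
            for c in range(w):         # accumulated into `acc`
                acc[r][c] += abs(frame[r][c] - ref[r][c])
    return {(r, c) for r in range(h)
            for c in range(w) if acc[r][c] > thresh}
```

Accumulating over a time window suppresses single-frame noise: a moving ground object differs from the reference in many frames, so its pixels dominate the detection image.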
    • 5. Granted invention patent
    • Adaptive multi-modal detection and fusion in videos via classification-based-learning
    • Patent No.: US08965115B1 (granted 2015-02-24)
    • Application No.: US14100886 (filed 2013-12-09)
    • Assignee: HRL Laboratories, LLC
    • Inventors: Deepak Khosla, Alexander L. Honda, Yang Chen, Shinko Y. Cheng, Kyungnam Kim, Lei Zhang, Changsoo S. Jeong
    • IPC: G06K9/62, G06K9/00
    • CPC: G06K9/00664, G06K9/3241, G06K9/6264, G06K9/629
    • Abstract: Described is a system for object detection using classification-based learning. A fusion method is selected, then a video sequence is processed to generate detections for each frame, wherein a detection is a representation of an object candidate. The detections are fused to generate a set of fused detections for each frame. The classification module generates a classification score labeling each fused detection based on a predetermined classification threshold. Otherwise, a token indicating that the classification module has abstained from generating a classification score is generated. The scoring module produces a confidence score for each fused detection based on a set of learned parameters from the learning module and the set of fused detections. The set of fused detections are filtered by the accept-reject module based on one of the classification score or the confidence score. Finally, a set of final detections representing an object is output.
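The accept-reject logic in the abstract above — a classifier that may abstain, with a learned confidence score as fallback — can be sketched like this. Everything here (function names, the sentinel token, thresholds) is an assumption for illustration; the patent's actual modules are learned components.

```python
ABSTAIN = object()  # sentinel token: classifier abstained on this detection

def filter_detections(fused, classify, score,
                      cls_thresh=0.5, conf_thresh=0.5):
    """Accept-reject filtering over fused detections (illustrative).

    `classify(det)` returns a classification score, or ABSTAIN when
    the classification module declines to label the detection.
    `score(det)` is the learned confidence scorer used as fallback.
    A detection survives if its classification score clears
    `cls_thresh`, or, on abstention, its confidence clears
    `conf_thresh` -- mirroring "one of the classification score or
    the confidence score".
    """
    final = []
    for det in fused:
        c = classify(det)
        if c is ABSTAIN:
            if score(det) >= conf_thresh:   # fall back to confidence
                final.append(det)
        elif c >= cls_thresh:
            final.append(det)
    return final
```

The sentinel-object pattern keeps a legitimate classification score of 0.0 distinct from "no score produced".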
    • 8. Granted invention patent
    • Robust static and moving object detection system via attentional mechanisms
    • Patent No.: US09317776B1 (granted 2016-04-19)
    • Application No.: US14205349 (filed 2014-03-11)
    • Assignee: HRL Laboratories, LLC
    • Inventors: Alexander L. Honda, Deepak Khosla, Yang Chen, Kyungnam Kim, Shinko Y. Cheng, Lei Zhang, Changsoo S. Jeong
    • IPC: G06K9/00, G06K9/62, G06T7/40, G06T7/60
    • CPC: G06T5/008, G06T2207/10024
    • Abstract: Described is a system for object detection via multi-scale attentional mechanisms. The system receives a multi-band image as input. Anti-aliasing and downsampling processes are performed to reduce the size of the multi-band image. Targeted contrast enhancement is performed on the multi-band image to enhance a target color of interest. A response map for each target color of interest is generated, and each response map is independently processed to generate a saliency map. The saliency map is converted into a set of detections representing potential objects of interest, wherein each detection is associated with parameters, such as position parameters, size parameters, an orientation parameter, and a score parameter. A post-processing step is applied to filter out false alarm detections in the set of detections, resulting in a final set of detections. Finally, the final set of detections and their associated parameters representing objects of interest is output.
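The per-color response map described above can be sketched as a similarity score between each pixel and one target color of interest, followed by thresholding into detections. The Gaussian fall-off, sigma value, and function names are assumptions, not the patent's actual enhancement operator.

```python
import math

def color_response(image, target, sigma=60.0):
    """Per-pixel response to one target color of interest (sketch).

    `image` is a list of rows of (r, g, b) tuples. The response is a
    Gaussian fall-off with squared color distance, so pixels near
    `target` score close to 1.0. One such map is built per target
    color; each is then processed independently into a saliency map.
    """
    def resp(px):
        d2 = sum((a - b) ** 2 for a, b in zip(px, target))
        return math.exp(-d2 / (2 * sigma ** 2))
    return [[resp(px) for px in row] for row in image]

def detect(response_map, thresh=0.5):
    """Convert a response map into a list of (row, col) detections."""
    return [(r, c) for r, row in enumerate(response_map)
            for c, v in enumerate(row) if v >= thresh]
```

In the full system each detection would also carry size, orientation, and score parameters; here only position is recovered, to keep the sketch minimal.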
    • 9. Granted invention patent
    • Rapid object detection by combining structural information from image segmentation with bio-inspired attentional mechanisms
    • Patent No.: US09147255B1 (granted 2015-09-29)
    • Application No.: US13967227 (filed 2013-08-14)
    • Assignee: HRL Laboratories, LLC
    • Inventors: Lei Zhang, Shinko Y. Cheng, Yang Chen, Alexander L. Honda, Kyungnam Kim, Deepak Khosla, Changsoo S. Jeong
    • IPC: G06K9/34, G06T7/00
    • CPC: G06T7/0079, G06K9/4671, G06T7/11, G06T7/143, G06T7/162, G06T2207/10024, G06T2207/20072, G06T2207/20076
    • Abstract: Described is a system for rapid object detection combining structural information with bio-inspired attentional mechanisms. The system oversegments an input image into a set of superpixels, where each superpixel comprises a plurality of pixels. For each superpixel, a bounding box defining a region of the input image representing a detection hypothesis is determined. An average residual saliency (ARS) is calculated for the plurality of pixels belonging to each superpixel. Each detection hypothesis that is out of a range of a predetermined threshold value for object size is eliminated. Next, each remaining detection hypothesis having an ARS below a predetermined threshold value is eliminated. Then, color contrast is calculated for the region defined by the bounding box for each remaining detection hypothesis. Each detection hypothesis having a color contrast below a predetermined threshold is eliminated. Finally, the remaining detection hypotheses are output to a classifier for object recognition.
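The abstract above describes a cascade of cheap elimination tests (size, then ARS, then color contrast) applied to superpixel detection hypotheses before classification. A minimal sketch of that cascade, with hypothetical field names and threshold values:

```python
def filter_hypotheses(hyps, size_range=(20, 400),
                      ars_min=0.1, contrast_min=0.2):
    """Cascaded elimination of detection hypotheses (illustrative).

    Each hypothesis is a dict with hypothetical keys: 'size'
    (bounding-box area), 'ars' (average residual saliency), and
    'contrast' (color contrast of the boxed region). Gates run in
    the patent's order -- size, then saliency, then contrast -- so
    cheaper tests prune hypotheses before costlier ones run.
    Survivors would be passed to a classifier for recognition.
    """
    lo, hi = size_range
    keep = [h for h in hyps if lo <= h['size'] <= hi]          # size gate
    keep = [h for h in keep if h['ars'] >= ars_min]            # ARS gate
    keep = [h for h in keep if h['contrast'] >= contrast_min]  # contrast gate
    return keep
```

Ordering the gates cheapest-first is the point of a cascade: most hypotheses never reach the contrast computation, which is why the overall detector is fast.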