    • 3. Invention Application
    • Multistage Learner for Efficiently Boosting Large Datasets
    • Publication number: US20150186795A1
    • Publication date: 2015-07-02
    • Application number: US14142977
    • Filing date: 2013-12-30
    • Assignee: Google Inc.
    • Inventors: Tushar Deepak Chandra; Tal Shaked; Yoram Singer; Tze Way Eugene Ie; Joshua Redstone
    • IPC: G06N99/00
    • CPC: G06N99/005
    • Implementations of the disclosed subject matter provide methods and systems for using a multistage learner for efficiently boosting large datasets in a machine learning system. A method may include obtaining a first plurality of examples for a machine learning system and selecting a first point in time. Next, a second point in time occurring subsequent to the first point in time may be selected. The machine learning system may be trained using m of the first plurality of examples. Each of the m examples may include a feature initially occurring after the second point in time. In addition, the machine learning system may be trained using n of the first plurality of examples, and each of the n examples may include a feature initially occurring after the first point in time.
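The staged training described in this abstract can be pictured with a short sketch: examples are bucketed by when their features first appear in the data, the learner makes one pass over the examples whose features are newer than the later cutoff t2, and then a second pass over the larger set whose features are newer than the earlier cutoff t1. The names `Example`, `SimpleLearner`, and `multistage_train` below are hypothetical, and the additive per-feature update merely stands in for the patent's boosted learner; this is a sketch of the staging idea under those assumptions, not the claimed implementation.

```python
from dataclasses import dataclass

@dataclass
class Example:
    features: set[str]   # feature identifiers present in this example
    label: float         # e.g. +1.0 / -1.0
    timestamp: float     # when the example was logged

class SimpleLearner:
    """Additive per-feature model standing in for the boosted learner."""
    def __init__(self, lr: float = 0.1):
        self.weights: dict[str, float] = {}
        self.lr = lr

    def predict(self, ex: Example) -> float:
        return sum(self.weights.get(f, 0.0) for f in ex.features)

    def update(self, ex: Example) -> None:
        err = ex.label - self.predict(ex)
        for f in ex.features:
            self.weights[f] = self.weights.get(f, 0.0) + self.lr * err

def multistage_train(examples: list[Example], t1: float, t2: float) -> SimpleLearner:
    """Two-stage training pass keyed on when each feature first appears."""
    assert t1 < t2
    # Earliest time each feature is seen anywhere in the dataset.
    birth: dict[str, float] = {}
    for ex in examples:
        for f in ex.features:
            birth[f] = min(birth.get(f, ex.timestamp), ex.timestamp)

    learner = SimpleLearner()
    # Stage 1: the m examples that contain a feature first occurring after t2.
    for ex in (e for e in examples if any(birth[f] > t2 for f in e.features)):
        learner.update(ex)
    # Stage 2: the n examples that contain a feature first occurring after t1
    # (a superset of stage 1, so the newest features get the extra pass).
    for ex in (e for e in examples if any(birth[f] > t1 for f in e.features)):
        learner.update(ex)
    return learner
```

Under this reading, features introduced after t2 receive updates in both stages, which is the sense in which the newest features are boosted on the smaller, more recent slice of the data first.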
    • 5. Invention Grant
    • Parallel processing of data
    • Publication number: US09536014B1
    • Publication date: 2017-01-03
    • Application number: US14922552
    • Filing date: 2015-10-26
    • Assignee: Google Inc.
    • Inventors: Kenneth J. Goldman; Tushar Deepak Chandra; Tal Shaked; Yonggang Zhao
    • IPC: G06F9/46; G06F17/30
    • CPC: G06F17/30371; G06F9/5066; G06F9/544; G06F17/30321; G06F17/30554; G06F17/30584; G06F17/30917; H04L67/1097
    • Parallel processing of data may include a set of map processes and a set of reduce processes. Each map process may include at least one map thread. Map threads may access distinct input data blocks assigned to the map process, and may apply an application specific map operation to the input data blocks to produce key-value pairs. Each map process may include a multiblock combiner configured to apply a combining operation to values associated with common keys in the key-value pairs to produce combined values, and to output intermediate data including pairs of keys and combined values. Each reduce process may be configured to access the intermediate data output by the multiblock combiners. For each key, an application specific reduce operation may be applied to the combined values associated with the key to produce output data.
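The abstract above can be illustrated with a small single-machine word-count sketch: each "map process" fans its assigned input blocks out to map threads, a multiblock combiner sums values that share a key across all of that process's blocks, and a reduce step consumes the resulting (key, combined value) intermediate data. The thread-pool topology and the names `map_op`, `multiblock_combine`, and `run_pipeline` are assumptions made for illustration; the patent does not tie the technique to word counting or to Python threads.

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def map_op(block: str) -> list[tuple[str, int]]:
    """Application-specific map: emit (word, 1) for every word in the block."""
    return [(word, 1) for word in block.split()]

def multiblock_combine(blocks: list[str]) -> dict[str, int]:
    """One 'map process': map each assigned block on its own thread, then
    combine values sharing a key across all of this process's blocks."""
    combined: dict[str, int] = defaultdict(int)
    with ThreadPoolExecutor(max_workers=len(blocks)) as pool:
        for pairs in pool.map(map_op, blocks):   # one map thread per block
            for key, value in pairs:
                combined[key] += value           # combining operation (sum)
    return dict(combined)

def reduce_op(values: list[int]) -> int:
    """Application-specific reduce: sum the combined values for a key."""
    return sum(values)

def run_pipeline(block_groups: list[list[str]]) -> dict[str, int]:
    # Each group of blocks is handled by one map process with a multiblock
    # combiner; reduce gathers the intermediate (key, combined value) pairs.
    intermediate: dict[str, list[int]] = defaultdict(list)
    for group in block_groups:
        for key, value in multiblock_combine(group).items():
            intermediate[key].append(value)
    return {key: reduce_op(vals) for key, vals in intermediate.items()}

if __name__ == "__main__":
    groups = [["a b a", "b c"], ["c c a"]]
    print(run_pipeline(groups))   # {'a': 3, 'b': 2, 'c': 3}
```

The point of the multiblock combiner in this reading is that values are pre-aggregated across all blocks owned by a map process, so the reduce step receives one combined value per key per process rather than one pair per occurrence.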
    • 7. Invention Grant
    • Multistage learner for efficiently boosting large datasets
    • Publication number: US09418343B2
    • Publication date: 2016-08-16
    • Application number: US14142977
    • Filing date: 2013-12-30
    • Assignee: Google Inc.
    • Inventors: Tushar Deepak Chandra; Tal Shaked; Yoram Singer; Tze Way Eugene Ie; Joshua Redstone
    • IPC: G06N3/04; G06N99/00
    • CPC: G06N99/005
    • Implementations of the disclosed subject matter provide methods and systems for using a multistage learner for efficiently boosting large datasets in a machine learning system. A method may include obtaining a first plurality of examples for a machine learning system and selecting a first point in time. Next, a second point in time occurring subsequent to the first point in time may be selected. The machine learning system may be trained using m of the first plurality of examples. Each of the m examples may include a feature initially occurring after the second point in time. In addition, the machine learning system may be trained using n of the first plurality of examples, and each of the n examples may include a feature initially occurring after the first point in time.
    • 10. Invention Grant
    • Efficient locking of large data collections
    • Publication number: US09569481B1
    • Publication date: 2017-02-14
    • Application number: US14101611
    • Filing date: 2013-12-10
    • Assignee: Google Inc.
    • Inventors: Tushar Deepak Chandra; Tal Shaked; Yoram Singer; Tze Way Eugene Ie; Joshua Redstone
    • IPC: G06F7/00; G06F17/30
    • CPC: G06F17/30371
    • The present disclosure provides systems and techniques for efficient locking of datasets in a database when updates to a dataset may be delayed. A method may include accumulating a plurality of updates to a first set of one or more values associated with one or more features. The first set of one or more values may be stored within a first database column. Next, it may be determined that a first database column update aggregation rule is satisfied. A lock assigned to at least a portion of at least a first database column may be acquired. Accordingly, one or more values in the first set within the first database column may be updated based on the plurality of updates. In an implementation, the first set of one or more values may be associated with the first lock.
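The deferred-locking scheme in the abstract above amounts to buffering updates and taking the column lock only once an aggregation rule is met. The sketch below assumes a simple count-based rule (`batch_threshold`) and models the "database column" as an in-memory dict; `AggregatedColumnUpdater` and its methods are illustrative names chosen here, not an API from the patent.

```python
import threading
from collections import defaultdict

class AggregatedColumnUpdater:
    def __init__(self, column: dict[str, float], batch_threshold: int = 100):
        self.column = column                    # feature -> value (the "column")
        self.batch_threshold = batch_threshold  # aggregation rule: flush after N updates
        self.pending: dict[str, list[float]] = defaultdict(list)
        self.pending_count = 0
        self.column_lock = threading.Lock()     # lock covering the column

    def add_update(self, feature: str, delta: float) -> None:
        """Accumulate an update without touching the locked column yet."""
        self.pending[feature].append(delta)
        self.pending_count += 1
        if self.pending_count >= self.batch_threshold:   # rule satisfied
            self.flush()

    def flush(self) -> None:
        """Acquire the column lock once and apply every accumulated update."""
        with self.column_lock:
            for feature, deltas in self.pending.items():
                self.column[feature] = self.column.get(feature, 0.0) + sum(deltas)
        self.pending.clear()
        self.pending_count = 0
```

Batching the writes this way means the lock is acquired once per flush rather than once per update, which is the efficiency the abstract attributes to delaying updates until the aggregation rule is satisfied.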