    • 1. Granted patent
    • Title: Method and apparatus of a fully-pipelined FFT
    • Publication number: US09418047B2
    • Publication date: 2016-08-16
    • Application number: US14192725
    • Filing date: 2014-02-27
    • Assignee: Tensorcom, Inc.
    • Inventors: Bo Lu; Ricky Lap Kei Cheung; Bo Xia
    • IPC: G06F17/14
    • CPC: G06F17/142
    • Abstract: A plurality of three-bit units (called triplets) are permuted by a shuffler to shuffle the positions of the triplets into different patterns, which are used to specify the read/write operation of a memory. For example, the least significant triplet of a conventional counter can be placed in the most significant position of a permuted three-triplet pattern. The count of this permuted counter then generates addresses that jump 64 positions each clock cycle. These permutations can be used to generate read and write control information for memory banks in a pattern conducive to efficient Radix-8 Butterfly operation. In addition, one or more triplets can determine whether a barrel shifter or a right circular shift is required to move data from one data lane to a second data lane. The triplets allow efficient FFT operation in a pipelined structure. (A sketch of the triplet-permuted addressing follows this entry.)
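The abstract describes generating memory addresses by reordering the 3-bit triplets of a counter. Below is a minimal Python sketch of that idea under simple assumptions: a 9-bit counter split into three triplets, with the least significant triplet moved to the most significant position. The function names and the exact shuffle chosen are illustrative, not taken from the patent.

```python
def split_triplets(count):
    """Split a 9-bit count into (t2, t1, t0); t0 is the least significant triplet."""
    return (count >> 6) & 0x7, (count >> 3) & 0x7, count & 0x7

def permuted_address(count):
    """Place the least significant triplet in the most significant position.
    Because t0 changes on every count, consecutive counts produce addresses
    that jump 64 positions per clock, as in the abstract."""
    t2, t1, t0 = split_triplets(count)
    return (t0 << 6) | (t2 << 3) | t1

for clk in range(4):
    print(clk, permuted_address(clk))   # 0 -> 0, 1 -> 64, 2 -> 128, 3 -> 192
```

Per the abstract, such permuted address patterns drive the read/write control that spreads accesses across memory banks for the Radix-8 butterfly.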
    • 2. Granted patent
    • Title: Method and apparatus of a fully-pipelined layered LDPC decoder
    • Publication number: US09276610B2
    • Publication date: 2016-03-01
    • Application number: US14165505
    • Filing date: 2014-01-27
    • Assignee: Tensorcom, Inc.
    • Inventors: Bo Xia; Ricky Lap Kei Cheung; Bo Lu
    • IPC: H03M13/11
    • CPC: H03M13/1145; H03M13/1122; H03M13/114; H03M13/1148; H03M13/116
    • Abstract: The architecture is able to switch to a non-blocking check-node-update (CNU) scheduling architecture, which has better performance than a blocking CNU scheduling architecture. The architecture uses an Offset Min-Sum with Beta = 1 and a clock domain operating at 440 MHz. The constraint macro-matrix is a sparse matrix in which each "1" corresponds to a sub-array that is a cyclically shifted version of an identity matrix. Four core processors are used in the layered architecture, where the constraint matrix uses a sub-array of 42 (check nodes) × 42 (variable nodes) in the macro-array of 168 × 672 bits. Pipeline processing is used, and the delay for each layer only requires 4 clock cycles. (A sketch of the Offset Min-Sum check-node update follows this entry.)
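The abstract refers to an Offset Min-Sum check-node update with Beta = 1. The sketch below shows the generic textbook form of that update (for each edge, the minimum of the other incoming magnitudes minus the offset, floored at zero, with the sign given by the product of the other signs); it is a software illustration, not the patented hardware datapath.

```python
def offset_min_sum_cnu(v2c, beta=1.0):
    """Check-to-variable messages for one check node under Offset Min-Sum.
    v2c is the list of incoming variable-to-check messages (LLRs)."""
    c2v = []
    for i in range(len(v2c)):
        others = [m for j, m in enumerate(v2c) if j != i]
        sign = -1 if sum(m < 0 for m in others) % 2 else 1
        magnitude = max(min(abs(m) for m in others) - beta, 0.0)
        c2v.append(sign * magnitude)
    return c2v

# Example: one check node connected to four variable nodes.
print(offset_min_sum_cnu([2.5, -3.0, 4.0, -1.5]))   # [0.5, -0.5, 0.5, -1.5]
```

A hardware datapath would typically track only the two smallest incoming magnitudes instead of recomputing the minimum per edge, but the result per edge is the same as above.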
    • 3. Patent application
    • Title: Method and Apparatus of a Fully-Pipelined Layered LDPC Decoder
    • Publication number: US20150214980A1
    • Publication date: 2015-07-30
    • Application number: US14165505
    • Filing date: 2014-01-27
    • Assignee: Tensorcom, Inc.
    • Inventors: Bo Xia; Ricky Lap Kei Cheung; Bo Lu
    • IPC: H03M13/11
    • CPC: H03M13/1145; H03M13/1122; H03M13/114; H03M13/1148; H03M13/116
    • Abstract: The architecture is able to switch to a non-blocking check-node-update (CNU) scheduling architecture, which has better performance than a blocking CNU scheduling architecture. The architecture uses an Offset Min-Sum with Beta = 1 and a clock domain operating at 440 MHz. The constraint macro-matrix is a sparse matrix in which each "1" corresponds to a sub-array that is a cyclically shifted version of an identity matrix. Four core processors are used in the layered architecture, where the constraint matrix uses a sub-array of 42 (check nodes) × 42 (variable nodes) in the macro-array of 168 × 672 bits. Pipeline processing is used, and the delay for each layer only requires 4 clock cycles.
    • 4. Patent application
    • Title: Method and Apparatus of a Fully-Pipelined Layered LDPC Decoder
    • Publication number: US20160173131A1
    • Publication date: 2016-06-16
    • Application number: US15011252
    • Filing date: 2016-01-29
    • Assignee: Tensorcom, Inc.
    • Inventors: Bo Xia; Ricky Lap Kei Cheung; Bo Lu
    • IPC: H03M13/11
    • CPC: H03M13/1145; H03M13/1122; H03M13/114; H03M13/1148; H03M13/116
    • Abstract: Processors are arranged in a pipeline structure to operate on multiple layers of data, each layer comprising multiple groups of data. An input to a memory is coupled to an output of the last processor in the pipeline, and the memory's output is coupled to an input of the first processor in the pipeline. Multiplexing and de-multiplexing operations are performed in the pipeline. For each group in each layer, a stored result read from the memory is applied to the first processor in the pipeline structure. A calculated result of the stored result is output at the last processor and stored in the memory. Once processing for the last group of data in a first layer is completed, the corresponding processor is configured to process data in a next layer before the pipeline finishes processing the first layer. The stored result obtained for the next layer comprises a calculated result produced from a layer previous to the first layer. (A sketch of this overlapped layer schedule follows this entry.)
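The abstract describes an overlapped schedule in which the next layer's groups enter the pipeline before the current layer has drained, with per-group results looped back through a memory to the first processor. Below is a minimal Python sketch of that timing under assumed parameters (4 pipeline stages, 2 groups per layer, chosen only so the overlap is visible); the numbers and names are illustrative, not taken from the patent.

```python
STAGES = 4            # assumed pipeline depth, first to last processor
GROUPS_PER_LAYER = 2  # assumed groups per layer (fewer groups than stages forces overlap)
LAYERS = 4

def issue_clock(layer, group):
    """Clock at which (layer, group) enters the first processor: one group per clock,
    and the next layer starts right after the last group of the current layer."""
    return layer * GROUPS_PER_LAYER + group

def writeback_clock(layer, group):
    """Clock at which (layer, group)'s result leaves the last processor and is stored."""
    return issue_clock(layer, group) + STAGES - 1

for layer in range(1, LAYERS):
    for group in range(GROUPS_PER_LAYER):
        read_clk = issue_clock(layer, group)
        # newest earlier layer whose result for this group is already in memory
        ready = [l for l in range(layer) if writeback_clock(l, group) < read_clk]
        source = f"layer {ready[-1]}" if ready else "the initial values"
        print(f"layer {layer} group {group}: reads memory at clk {read_clk}, sees {source}")
```

The output illustrates the abstract's last sentence: when layer 2 starts, layer 1 is still in flight, so the stored result the first processor reads was produced by layer 0, a layer previous to the one most recently issued.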