    • 2. Invention patent
    • Title: Combining three-dimensional morphable models
    • Publication number: GB2582010B
    • Publication date: 2021-07-28
    • Application number: GB201903125
    • Filing date: 2019-03-08
    • Applicant: HUAWEI TECH CO LTD
    • Inventors: STYLIANOS PLOUMPIS; STEFANOS ZAFEIRIOU
    • IPC: G06T19/20
    • Abstract: A three-dimensional morphable model (3DMM) is produced by combining first and second 3DMMs: a plurality of first shapes is generated (2.1) using the first 3DMM, and a mapping is calculated (2.2) from second parameters of the second 3DMM to first parameters of the first 3DMM. For each second shape generated using the second 3DMM, a corresponding first shape is generated (2.3). Merged shapes are formed (2.4) by merging each second shape with the corresponding first shape, and principal component analysis is performed (2.5) on the merged shapes to generate the new 3DMM. Also claimed is generating a Gaussian process morphable model (GPMM) by combining first and second 3DMMs (see fig. 4): a mean shape of the first 3DMM is registered to a mean shape of the second 3DMM and a template shape, and points of the template shape are projected onto the mean shapes of the first and/or second 3DMM. A universal covariance matrix for the GPMM is determined from pairs of points of the template shape projected onto the mean shape of the first and/or second 3DMM, together with a covariance matrix for each of the first and second 3DMMs. The GPMM is defined by the universal covariance matrix and a predefined mean deformation.
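The combination steps (2.1)–(2.5) can be sketched numerically. The sketch below assumes each 3DMM reduces to a mean vector plus a linear basis; all names (`basis1`, `map21`, the toy sizes) are illustrative, not from the patent, and the mapping is obtained here by simple least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy linear 3DMMs: shape = mean + basis @ params
n_pts = 30                               # flattened 3D vertex coordinates
mean1, basis1 = rng.normal(size=n_pts), rng.normal(size=(n_pts, 5))
mean2, basis2 = rng.normal(size=n_pts), rng.normal(size=(n_pts, 4))

# (2.1) generate a plurality of first shapes using the first 3DMM
p1 = rng.normal(size=(200, 5))
shapes1 = mean1 + p1 @ basis1.T

# (2.2) calculate a mapping from second-model parameters to
# first-model parameters: fit first-model parameters for shapes drawn
# from model 2 by least squares, then regress p2 -> p1
p2 = rng.normal(size=(200, 4))
shapes2 = mean2 + p2 @ basis2.T
p1_fit, *_ = np.linalg.lstsq(basis1, (shapes2 - mean1).T, rcond=None)
map21, *_ = np.linalg.lstsq(p2, p1_fit.T, rcond=None)

# (2.3) for each second shape, generate the corresponding first shape
shapes1_corr = mean1 + (p2 @ map21) @ basis1.T

# (2.4) merge each second shape with its corresponding first shape
merged = 0.5 * (shapes2 + shapes1_corr)

# (2.5) PCA on the merged shapes yields the new, combined 3DMM
merged_mean = merged.mean(axis=0)
_, s, vt = np.linalg.svd(merged - merged_mean, full_matrices=False)
new_basis = vt[:4].T                     # retain the top components

print(new_basis.shape)   # (30, 4)
```

Averaging in step (2.4) is one possible merge; the patent leaves the merge operation more general.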
    • 3. Invention patent
    • Title: Facial landmark localisation system and method
    • Publication number: GB2576784B
    • Publication date: 2021-05-19
    • Application number: GB201814275
    • Filing date: 2018-09-03
    • Applicant: HUAWEI TECH CO LTD
    • Inventors: JIANKANG DENG; STEFANOS ZAFEIRIOU
    • IPC: G06K9/46; G06N3/04
    • Abstract: A neural network system and method for localising facial landmarks in an image comprises two or more convolutional neural networks, 200, in series. Each convolutional neural network comprises a plurality of downsampling layers, 211, 212, 213, 224, which downsample the input image signal, and a plurality of upsampling layers, 220, 222, 216, 224, 215, 214, which upsample the downsampled signal. During upsampling at an upsampling layer, the upsampled signal is also combined with the signal from a connection to a lateral layer of equal size. Each of the upsampling layers thus aggregates input from the previous lateral layer of equal size with downsampled input from a smaller downsampling layer. At least one of these upsampling layers, 224, further includes an input from a larger (upsampling) layer. Lateral connections may be skip connections or one or more convolutions, e.g. depth-wise separable convolution(s). The input may be a 128x128 pixel input image. The system may comprise a channel aggregation block for each layer of the convolutional neural network, the blocks including channel increase, decrease and branch steps. Outputs of each convolutional neural network may be connected to spatial transformers, such as a deformable convolution.
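The down/up-sampling-with-lateral-aggregation pattern can be illustrated without any learned weights, tracking only how signals of equal size are combined. This is a minimal sketch, assuming mean pooling for downsampling, nearest-neighbour repetition for upsampling, and summation as the combination; the real network uses convolutions throughout.

```python
import numpy as np

def downsample(x):
    """Halve spatial resolution by 2x2 mean pooling."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    """Double spatial resolution by nearest-neighbour repetition."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def hourglass(img):
    # Downsampling path: keep each resolution for the lateral connections
    laterals = []
    x = img
    for _ in range(3):
        laterals.append(x)
        x = downsample(x)
    # Upsampling path: each upsampled signal is combined (summed here)
    # with the lateral signal of equal spatial size
    for lateral in reversed(laterals):
        x = upsample(x) + lateral
    return x

out = hourglass(np.ones((128, 128)))     # e.g. a 128x128 input image
print(out.shape)   # (128, 128)
```

Stacking two or more such modules in series, each refining the previous output, gives the cascaded structure the abstract describes.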
    • 4. Invention patent
    • Title: Generating three-dimensional facial data
    • Publication number: GB2585708A
    • Publication date: 2021-01-20
    • Application number: GB201910114
    • Filing date: 2019-07-15
    • Applicant: HUAWEI TECH CO LTD
    • Inventors: BARIC GECER; STEFANOS ZAFEIRIOU
    • IPC: G06T13/40; G06K9/00
    • Abstract: A computer-implemented method of generating 3D facial data using a generator neural network 102 comprises: inputting, into the generator network, initialisation data 104 which comprises noise; processing the initialisation data through the generator network to generate UV maps 106 in a plurality of modalities, the maps comprising a shape map 108, a texture map 110, and a normal map 112; and outputting facial data 114 which comprises the UV maps. The generator neural network comprises: an initial set of network layers 116 to generate feature maps from the initialisation data; a first branch 122 configured to generate the shape map from one or more feature maps; a second branch 124 to generate the texture map from one or more feature maps; and a third branch 126 to generate the normal map from the one or more feature maps. The generator neural network may be trained using a discriminator network (figure 3; 304) of reciprocal structure. The facial data produced by the trained generator network may be used to train a facial recognition neural network (figure 5; 504). The initialisation data may comprise expression data.
    • 5. Invention patent
    • Title: Facial localisation in images
    • Publication number: GB2582833A
    • Publication date: 2020-10-07
    • Application number: GB201906027
    • Filing date: 2019-04-30
    • Applicant: HUAWEI TECH CO LTD
    • Inventors: JIANKANG DENG; STEFANOS ZAFEIRIOU
    • IPC: G06K9/00; G06N3/04; G06N3/08
    • Abstract: The invention relates to methods of facial localisation in images. According to a first aspect (Fig. 2), there is described a computer-implemented method of training a neural network 106 for face localisation, the method comprising: inputting, to the neural network, a training image 202 comprising one or more faces (104); processing, using the neural network, the training image and outputting, for each of a plurality of training anchors in the training image, one or more sets of output data, wherein the output data comprises a predicted facial classification, a predicted location of a corresponding face box (110), and one or more corresponding feature vectors; then updating parameters of the neural network in dependence on an objective function. The objective function comprises, for each positive anchor in the training image, a classification loss 210, a box regression loss 212 and a feature loss 214, based on comparisons of predicted properties with known properties in the anchor. A second aspect (Fig. 1) covers using a neural network to perform face localisation, wherein the network comprises convolutional layers and filters, as well as skip connections.
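The three-term objective can be sketched as follows. This is a simplified stand-in, assuming binary cross-entropy for classification, an L1 box term, and an L2 feature term, with the box and feature terms gated to positive anchors only; the patent's exact loss forms may differ.

```python
import numpy as np

def anchor_objective(pred_cls, true_cls, pred_box, true_box,
                     pred_feat, true_feat):
    """Per-anchor objective: classification + box regression + feature
    loss.  Box and feature terms count only for positive anchors
    (anchors whose ground-truth label is 'face')."""
    eps = 1e-9
    # Binary cross-entropy classification loss for every anchor
    cls_loss = -(true_cls * np.log(pred_cls + eps)
                 + (1 - true_cls) * np.log(1 - pred_cls + eps))
    # L1 box regression loss, positive anchors only
    box_loss = np.abs(pred_box - true_box).sum(axis=1) * true_cls
    # L2 feature loss, positive anchors only
    feat_loss = ((pred_feat - true_feat) ** 2).sum(axis=1) * true_cls
    return (cls_loss + box_loss + feat_loss).mean()

# Three anchors: two positive, one negative (illustrative data)
loss = anchor_objective(
    pred_cls=np.array([0.9, 0.8, 0.1]), true_cls=np.array([1, 1, 0]),
    pred_box=np.zeros((3, 4)), true_box=np.zeros((3, 4)),
    pred_feat=np.zeros((3, 8)), true_feat=np.zeros((3, 8)))
print(loss > 0)   # True
```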
    • 6. Invention patent
    • Title: Generating three-dimensional facial data
    • Publication number: GB2585708B
    • Publication date: 2022-07-06
    • Application number: GB201910114
    • Filing date: 2019-07-15
    • Applicant: HUAWEI TECH CO LTD
    • Inventors: BARIC GECER; STEFANOS ZAFEIRIOU
    • IPC: G06T17/00; G06T13/40; G06V40/16
    • Abstract: A computer-implemented method of generating 3D facial data using a generator neural network 102 comprises: inputting, into the generator network, initialisation data 104 which comprises noise; processing the initialisation data through the generator network to generate UV maps 106 in a plurality of modalities, the maps comprising a shape map 108, a texture map 110, and a normal map 112; and outputting facial data 114 which comprises the UV maps. The generator neural network comprises: an initial set of network layers 116 to generate feature maps from the initialisation data; a first branch 122 configured to generate the shape map from one or more feature maps; a second branch 124 to generate the texture map from one or more feature maps; and a third branch 126 to generate the normal map from the one or more feature maps. The generator neural network may be trained using a discriminator network (figure 3; 304) of reciprocal structure. The facial data produced by the trained generator network may be used to train a facial recognition neural network (figure 5; 504). The initialisation data may comprise expression data.
    • 7. Invention patent
    • Title: Facial re-enactment
    • Publication number: GB2596777A
    • Publication date: 2022-01-12
    • Application number: GB202007052
    • Filing date: 2020-05-13
    • Applicant: HUAWEI TECH CO LTD
    • Inventors: STEFANOS ZAFEIRIOU; RAMI KOUJAN; MICHAIL-CHRISTOS DOUKAS
    • IPC: G06T13/40; G06K9/00
    • Abstract: A first plurality of sequential source face coordinate images and gaze tracking images is generated from a plurality of source input images of a first source subject or actor 12. The source face coordinate images comprise source identity parameters and source expression parameters, wherein the source expression parameters are represented as offsets from the source identity parameters. A plurality of sequential target face coordinate images and gaze tracking images of a first target subject or actor 14 is generated from a plurality of target input images of the first target subject. Using a first neural network, a plurality of sequential output images 16 is generated by a mapping module 10, wherein the output images are based on a mapping of the source expression parameters and the source gaze tracking images onto the target identity parameters. The neural network may be a generative adversarial network trained to generate the output images by inputting the source and target coordinate and gaze tracking images. A loss function may be used based on ground truth inputs and the sequential output images.
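The key representational choice, expressions stored as offsets from identity parameters, is what makes re-enactment a simple re-basing operation. A minimal sketch, with toy parameter vectors whose sizes are illustrative, not from the patent:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy identity parameter vectors for the source and target subjects
source_identity = rng.normal(size=16)
target_identity = rng.normal(size=16)

# Source expression parameters are represented as offsets from the
# source identity parameters, so a sequence of source frames becomes
# a sequence of offsets ...
source_frames = source_identity + rng.normal(scale=0.1, size=(5, 16))
expression_offsets = source_frames - source_identity

# ... which can be replayed on top of the target identity: the output
# sequence keeps the target's identity but the source's expressions
output_frames = target_identity + expression_offsets

print(output_frames.shape)   # (5, 16)
```

In the patent the re-based parameters (plus gaze tracking images) are rendered to coordinate images and passed through the trained GAN; this sketch covers only the parameter-space mapping.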
    • 8. Invention patent
    • Title: Method of training an image classification model
    • Publication number: GB2592076A
    • Publication date: 2021-08-18
    • Application number: GB202002157
    • Filing date: 2020-02-17
    • Applicant: HUAWEI TECH CO LTD
    • Inventors: JIANKANG DENG; STEFANOS ZAFEIRIOU
    • IPC: G06K9/62; G06N3/08
    • Abstract: A method of training a neural network classifier comprises: extracting from the neural network 204 a plurality of sub-class centre vectors 210; generating an embedding vector 206 from an input image 202; determining a similarity score 212 between the embedding vector and each of the sub-class centre vectors; updating neural network parameters 214 in dependence on each of the similarity scores using an objective function; and extracting and updating each sub-class centre vector from the neural network. Sub-class centre vectors may be extracted from the neural network’s last fully connected layer. The objective function may comprise a multi-centre loss term, such as a margin-based softmax loss function, that determines, for the embedding vector, a closest sub-class centre vector for each class using the similarity scores. The embedding vector and sub-class centre vectors may be normalised, with the similarity score being the angle therebetween. The objective function may comprise an intra-class compactness term that uses the normalised angle similarity score between a dominant sub-class centre vector and the other sub-class centre vectors of the same class. Non-dominant sub-classes may be discarded after a number of training cycles have been run. The method may be used to train a neural network on face images whose labels contain noise (wrong labels).
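The multi-centre, margin-based softmax can be sketched as below. This is a simplified stand-in: cosine similarity is used instead of the angle, and the margin is subtracted directly from the true class's similarity rather than applied to the angle; all sizes are illustrative.

```python
import numpy as np

def subclass_similarities(embedding, centres):
    """Cosine similarity between a normalised embedding and every
    normalised sub-class centre; centres has shape
    (n_classes, n_subclasses, dim)."""
    e = embedding / np.linalg.norm(embedding)
    c = centres / np.linalg.norm(centres, axis=-1, keepdims=True)
    return c @ e                         # (n_classes, n_subclasses)

def multi_centre_logits(embedding, centres):
    """For each class, keep only the closest sub-class centre."""
    return subclass_similarities(embedding, centres).max(axis=-1)

def margin_softmax_loss(embedding, centres, label, margin=0.3, scale=16.0):
    """Margin-based softmax: penalise the true class's best similarity
    by a margin before the softmax cross-entropy."""
    logits = multi_centre_logits(embedding, centres)
    logits[label] -= margin
    logits *= scale
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[label]

rng = np.random.default_rng(3)
centres = rng.normal(size=(4, 3, 8))     # 4 classes x 3 sub-classes, dim 8
loss = margin_softmax_loss(rng.normal(size=8), centres, label=1)
print(loss > 0)   # True
```

Maintaining several centres per class lets mislabelled or noisy face images gravitate to a non-dominant sub-class, which can later be discarded, rather than corrupting the dominant centre.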
    • 9. Invention patent
    • Title: Facial behaviour analysis
    • Publication number: GB2588747A
    • Publication date: 2021-05-12
    • Application number: GB201909300
    • Filing date: 2019-06-28
    • Applicant: HUAWEI TECH CO LTD
    • Inventors: STEFANOS ZAFEIRIOU; DIMITRIOS KOLLIAS
    • IPC: G06N3/02; G06K9/00
    • Abstract: This invention relates to methods for analysing facial behaviour in facial images using neural networks (NN). A first aspect covers a method of training a NN 212, the method comprising: inputting to the NN a plurality of facial images, comprising one or more images from a first dataset 202-1, wherein each image has a known emotion label 204 (e.g. happy, sad), and one or more images from a second dataset 202-2, wherein each image has a known action unit activation 206 (e.g. wrinkled nose, raised eyebrow). The method further comprises: generating for each of the images, using the NN, a predicted emotion label 214 and a predicted action unit activation 216; then updating parameters of the NN in dependence on a comparison of the predicted and known emotion labels, and of the predicted and known action unit activations. The comparison could be performed using a multi-task function 222-226 for calculating losses, e.g. cross-entropy loss. Further images 202-3 having known arousal or valence values 208 could also be used. A second aspect covers using the trained NN to classify facial images.
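The multi-task comparison can be sketched as a loss that only applies each term when the corresponding annotation exists, since each training image comes from a dataset with only one annotation type. This is a minimal sketch with assumed loss forms (softmax cross-entropy for emotions, binary cross-entropy for action units).

```python
import numpy as np

def multi_task_loss(emotion_logits, emotion_label, au_probs, au_targets):
    """Combined loss for one image: cross-entropy if it carries an
    emotion label, binary cross-entropy if it carries action unit
    annotations.  Either annotation may be None when the image comes
    from a dataset without that annotation type."""
    eps, total = 1e-9, 0.0
    if emotion_label is not None:
        p = np.exp(emotion_logits) / np.exp(emotion_logits).sum()
        total += -np.log(p[emotion_label] + eps)       # cross-entropy
    if au_targets is not None:
        total += -(au_targets * np.log(au_probs + eps)
                   + (1 - au_targets)
                   * np.log(1 - au_probs + eps)).sum()
    return total

# One image from the emotion dataset, one from the action-unit dataset
l1 = multi_task_loss(np.array([2.0, 0.1, -1.0]), 0, None, None)
l2 = multi_task_loss(None, None, np.array([0.9, 0.2]),
                     np.array([1.0, 0.0]))
print(l1 > 0 and l2 > 0)   # True
```

A third term of the same pattern would cover the images annotated with arousal/valence values, e.g. as a regression loss.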
    • 10. Invention patent
    • Title: Facial shape representation and generation system and method
    • Publication number: GB2581524B
    • Publication date: 2022-05-11
    • Application number: GB201902459
    • Filing date: 2019-02-22
    • Applicant: HUAWEI TECH CO LTD
    • Inventors: STYLIANOS MOSCHOGLOU; STEFANOS ZAFEIRIOU
    • IPC: G06T17/20; G06K9/62; G06N3/04; G06N3/08; G06T9/00; G06V10/70; G06V40/10
    • Abstract: A method of training a generative adversarial network (GAN) 200 for generation of a 3D face surface is described. The GAN 200 comprises a generator neural network 210 and a discriminator neural network 110. The method includes pre-training the discriminator network and then jointly training the generator network and the discriminator network. Pre-training the discriminator network comprises processing a pre-training set of input facial data and updating parameters of the discriminator network. Jointly training the generator network and the discriminator network comprises: initialising the networks; processing a training set of input facial data using the generator; processing the training set and the generator outputs using the discriminator; updating parameters of the generator based on the generator outputs and the associated discriminator outputs; and updating parameters of the discriminator based on the training set, the generator outputs and the discriminator outputs for the generator outputs. The generator and discriminator may each comprise an encoder and a decoder portion. The encoders may comprise a bottleneck layer 212, 214 which extracts a bottleneck 213 for each input face 202 and samples a bottleneck for the generated face based on the mean and covariance of the bottlenecks over the whole training dataset.
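The distinctive sampling step, drawing a generated face's bottleneck from the statistics of the training bottlenecks, can be sketched directly. The sketch assumes a Gaussian fit to the bottleneck vectors; sizes are illustrative, not from the patent.

```python
import numpy as np

rng = np.random.default_rng(4)

# Bottleneck vectors extracted by the encoder for every face in the
# training set (500 faces, 6-d bottleneck; sizes illustrative)
bottlenecks = rng.normal(size=(500, 6))

# A bottleneck for a *generated* face is sampled from a distribution
# whose mean and covariance are those of the bottlenecks over the
# whole training dataset
mu = bottlenecks.mean(axis=0)
cov = np.cov(bottlenecks, rowvar=False)
sampled = rng.multivariate_normal(mu, cov)

print(sampled.shape)   # (6,)
```

The sampled bottleneck is then passed through the generator's decoder to produce a novel 3D face surface consistent with the training population.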