    • 11. Invention patent
    • Title: 3D face reconstruction system and method
    • Publication no.: GB2581374B
    • Publication date: 2022-05-11
    • Application no.: GB201902067
    • Filing date: 2019-02-14
    • Assignee: HUAWEI TECH CO LTD
    • Inventors: BARIS GECER; STEFANOS ZAFEIRIOU
    • IPC: G06T15/20; G06T15/04; G06T17/20; G06V10/74; G06V10/75; G06V40/16
    • A computer-implemented method 500 for generating a 3D facial reconstruction based on an input facial image is presented. The method comprises generating an initial reconstruction 510 and iteratively updating 520 the reconstruction. The initial reconstruction is generated by generating 512 a facial mesh, using a shape model, and a texture map 514. The reconstruction is iteratively updated by using a differentiable renderer to project 522 the 3D reconstruction onto a 2D plane to form a rendered facial image, calculating 524 a loss function based on a comparison of the rendered facial image and the input facial image, and using gradient descent to generate 526 an updated facial reconstruction. The facial mesh may use an expression model (figure 1, 102) and the projection may be affected by illumination (figure 1, 103) and camera (figure 1, 104) parameters. The generation of the reconstruction may comprise the use of a generative adversarial network (GAN). The loss function calculation may use a comparison of the presence and/or position of facial landmarks between the input and rendered images. The loss function calculation may use the pixel-to-pixel dissimilarity (figure 3, 320) between the images. There may be more than one input image.
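The iterative fitting loop in this abstract (project, compare, descend) can be sketched with a toy linear model. The shape basis, "renderer", learning rate and step count below are illustrative stand-ins, not the patented components; a real implementation would use a 3D mesh and a true differentiable renderer.

```python
import numpy as np

# Toy analysis-by-synthesis loop in the spirit of the abstract: a "shape
# model" maps parameters to pixels, a linear stand-in "renderer" projects
# them to a 2D image, and gradient descent on a pixel-to-pixel loss
# iteratively refines the reconstruction.

rng = np.random.default_rng(0)
n_params, n_pixels = 8, 64
shape_basis = rng.normal(size=(n_pixels, n_params))  # stand-in shape model
target_params = rng.normal(size=n_params)
input_image = shape_basis @ target_params            # the "input facial image"

def render(params):
    """Differentiable stand-in renderer: linear projection onto a 2D plane."""
    return shape_basis @ params

def loss_and_grad(params):
    """Pixel-to-pixel dissimilarity (L2 loss) and its analytic gradient."""
    residual = render(params) - input_image
    return 0.5 * np.sum(residual ** 2), shape_basis.T @ residual

params = np.zeros(n_params)        # step 510: initial reconstruction
losses = []
for _ in range(200):               # step 520: iterative update
    loss, grad = loss_and_grad(params)
    losses.append(loss)
    params -= 0.005 * grad         # step 526: gradient descent update
print(losses[0], losses[-1])
```

The loss shrinks toward zero as the parameters converge on those that produced the input image.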
    • 12. Invention patent
    • Title: Facial localisation in images
    • Publication no.: GB2582833B
    • Publication date: 2021-04-07
    • Application no.: GB201906027
    • Filing date: 2019-04-30
    • Assignee: HUAWEI TECH CO LTD
    • Inventors: JIANKANG DENG; STEFANOS ZAFEIRIOU
    • IPC: G06K9/00; G06N3/04; G06N3/08
    • The invention relates to methods of facial localisation in images. According to a first aspect (Fig. 2), there is described a computer-implemented method of training a neural network 106 for face localisation, the method comprising: inputting, to the neural network, a training image 202 comprising one or more faces (104); processing, using the neural network, the training image and outputting for each of a plurality of training anchors in the training image, one or more sets of output data, wherein the output data comprises a predicted facial classification, a predicted location of a corresponding face box (110), and one or more corresponding feature vectors; then updating parameters of the neural network in dependence on an objective function. The objective function comprises, for each positive anchor in the training image, a classification loss 210, a box regression loss 212 and a feature loss 214, based on comparisons of predicted properties with known properties in the anchor. A second aspect (Fig. 1) covers using a neural network to perform face localisation, wherein the network comprises convolutional layers and filters, as well as skip connections.
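The objective function described above can be sketched numerically: each positive anchor contributes a classification loss, a box regression loss and a feature loss. The loss forms (binary cross-entropy, smooth L1, squared error) and all sizes are illustrative assumptions standing in for a real network's outputs.

```python
import numpy as np

# Sketch of the per-anchor multi-task objective: for each positive anchor,
# sum a face-classification term, a box-regression term and a feature term.
# Predictions and targets here are random placeholders.

rng = np.random.default_rng(1)
n_anchors = 6
is_positive = np.array([True, False, True, True, False, False])

cls_logit = rng.normal(size=n_anchors)         # predicted facial classification
cls_label = is_positive.astype(float)          # known face / non-face label
box_pred = rng.normal(size=(n_anchors, 4))     # predicted face box offsets
box_true = rng.normal(size=(n_anchors, 4))     # known face box offsets
feat_pred = rng.normal(size=(n_anchors, 10))   # predicted feature vectors
feat_true = rng.normal(size=(n_anchors, 10))   # known feature targets

def binary_cross_entropy(logit, label):
    p = 1.0 / (1.0 + np.exp(-logit))           # sigmoid
    return -(label * np.log(p) + (1 - label) * np.log(1 - p))

def smooth_l1(diff):
    a = np.abs(diff)
    return np.where(a < 1.0, 0.5 * a * a, a - 0.5).sum(axis=-1)

cls_loss = binary_cross_entropy(cls_logit, cls_label)      # classification loss
box_loss = smooth_l1(box_pred - box_true)                  # box regression loss
feat_loss = ((feat_pred - feat_true) ** 2).sum(axis=-1)    # feature loss

# The objective sums all three terms over the positive anchors only.
objective = (cls_loss + box_loss + feat_loss)[is_positive].sum()
print(float(objective))
```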
    • 13. Invention patent
    • Title: Method of training an image classification model
    • Publication no.: GB2592076B
    • Publication date: 2022-09-07
    • Application no.: GB202002157
    • Filing date: 2020-02-17
    • Assignee: HUAWEI TECH CO LTD
    • Inventors: JIANKANG DENG; STEFANOS ZAFEIRIOU
    • IPC: G06V10/77; G06K9/62; G06N3/08
    • Method of training a neural network classifier, comprising: extracting from the neural network 204 a plurality of subclass centre vectors 210; generating an embedding vector 206 from an input image 202; determining a similarity score 212 between the embedding vector and each of the subclass centre vectors; updating neural network parameters 214 in dependence on each of the similarity scores using an objective function; and extracting and updating each subclass centre vector from the neural network. Subclass centre vectors may be extracted from the neural network's last fully connected layer. The objective function may comprise a multi-centre loss term, such as a margin-based Softmax loss function, that determines, for the embedding vector, a closest subclass centre vector for each class using the similarity scores. The embedding vector and subclass centre vectors may be normalised, and the similarity score may be the angle therebetween. The objective function may comprise an intra-class compactness term that uses the intra-class normalised angle similarity score between a sub-class's dominant vector and the other sub-classes. Non-dominant subclasses may be discarded after a number of training cycles have been run. The method may be used to train a neural network on face images that contain noise (wrong labels).
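The multi-centre similarity step can be sketched as follows: each class keeps several subclass centre vectors, the normalised embedding is scored against every centre, and the closest subclass per class feeds a margin-based softmax. The dimensions, margin and scale values are illustrative assumptions, not taken from the patent.

```python
import numpy as np

# Sketch of the subclass-centre similarity and margin-based Softmax loss.
# The centres would come from the network's last fully connected layer;
# here they are random placeholders.

rng = np.random.default_rng(2)
n_classes, k_sub, dim = 5, 3, 16

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

centres = l2_normalize(rng.normal(size=(n_classes, k_sub, dim)))  # subclass centres
embedding = l2_normalize(rng.normal(size=dim))                    # from input image

sims = centres @ embedding            # cosine similarity to every subclass centre
class_sims = sims.max(axis=1)         # closest subclass centre per class

# Margin-based softmax on the angle: add a margin to the true class's angle,
# which makes training demand a larger separation for that class.
true_class, margin, scale = 0, 0.5, 64.0
angles = np.arccos(np.clip(class_sims, -1.0, 1.0))
logits = np.cos(angles)
logits[true_class] = np.cos(angles[true_class] + margin)
probs = np.exp(scale * logits)
loss = -np.log(probs[true_class] / probs.sum())
print(float(loss))
```

Because non-dominant subclasses can absorb mislabelled images, this style of loss is used to train on noisy face datasets.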
    • 15. Invention patent
    • Title: Facial behaviour analysis
    • Publication no.: GB2588747B
    • Publication date: 2021-12-08
    • Application no.: GB201909300
    • Filing date: 2019-06-28
    • Assignee: HUAWEI TECH CO LTD
    • Inventors: STEFANOS ZAFEIRIOU; DIMITRIOS KOLLIAS
    • IPC: G06K9/62; G06K9/00; G06N3/02
    • This invention relates to methods for analysing facial behaviour in facial images using neural networks (NN). A first aspect covers a method of training a NN 212, the method comprising: inputting to the NN a plurality of facial images, the images comprising: one or more images from a first dataset 202-1 wherein each image comprises a known emotion label 204 (e.g. happy, sad); and one or more images from a second dataset 202-2 wherein each image comprises a known action unit activation 206 (e.g. wrinkled nose, raised eyebrow). The method further comprises: generating for each of the images using the NN, a predicted emotion label 214 and predicted action unit activation 216; then updating parameters of the NN in dependence on a comparison of the predicted and known emotion labels, and the predicted and known action unit activations. The comparison could be performed using a multi-task function 222-226 for calculating losses, e.g. cross entropy loss. Further images 202-3 having known arousal or valence values 208 could also be used. A second aspect covers using the trained NN to classify facial images.
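The multi-task objective over heterogeneously labelled datasets can be sketched like this: images carrying an emotion label contribute a cross-entropy term, images carrying action-unit annotations contribute a binary cross-entropy term, and each image only activates the terms for which it has labels. The head outputs and label counts are illustrative placeholders for a real shared network.

```python
import numpy as np

# Sketch of the multi-task loss: an emotion-classification head and an
# action-unit (AU) head share one network; each training image activates
# only the loss terms for which it carries labels.

rng = np.random.default_rng(3)
n_emotions, n_aus = 7, 12

def softmax_ce(logits, label):
    z = logits - logits.max()                  # numerically stable log-softmax
    logp = z - np.log(np.exp(z).sum())
    return -logp[label]

def binary_ce(logits, targets):
    p = 1.0 / (1.0 + np.exp(-logits))          # per-AU sigmoid
    return -(targets * np.log(p) + (1 - targets) * np.log(1 - p)).sum()

batch = [
    {"emotion": 2, "aus": None},                          # from dataset 1
    {"emotion": None, "aus": rng.integers(0, 2, n_aus)},  # from dataset 2
]
total = 0.0
for sample in batch:
    emo_logits = rng.normal(size=n_emotions)   # predicted emotion label head
    au_logits = rng.normal(size=n_aus)         # predicted AU activation head
    if sample["emotion"] is not None:          # emotion term only if labelled
        total += softmax_ce(emo_logits, sample["emotion"])
    if sample["aus"] is not None:              # AU term only if annotated
        total += binary_ce(au_logits, sample["aus"].astype(float))
print(float(total))
```

A third term for arousal/valence regression would be added the same way for images that carry those values.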
    • 16. Invention patent
    • Title: Combining three-dimensional morphable models
    • Publication no.: GB2582047B
    • Publication date: 2021-03-31
    • Application no.: GB201917826
    • Filing date: 2019-03-08
    • Assignee: HUAWEI TECH CO LTD
    • Inventors: STYLIANOS PLOUMPIS; STEFANOS ZAFEIRIOU
    • IPC: G06T19/20
    • A Gaussian process morphable model (GPMM) is generated by combining first and second 3DMMs (see fig. 4) by registering a mean shape of the first 3DMM to a mean shape of the second 3DMM and a template shape, and projecting template shape points onto the mean shapes of the first and/or second 3DMM. A universal covariance matrix for the GPMM is determined based on pairs of template shape points projected onto the mean shape of the first and/or second 3DMM, and a covariance matrix for each of the first and second 3DMMs. The GPMM is defined based on the universal covariance matrix and a predefined mean deformation. Also disclosed is producing a three-dimensional morphable model (3DMM) by combining first and second 3DMMs, by generating (2.1) a plurality of first shapes using the first 3DMM, and calculating (2.2) a mapping from second parameters of the second 3DMM to first parameters of the first 3DMM. For each second shape generated using the second 3DMM, a corresponding first shape is generated (2.3). Merged shapes are formed (2.4) by merging each second shape with the corresponding first shape. Principal component analysis is performed (2.5) on the merged shapes to generate the new 3DMM.
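The second aspect's steps (2.1)-(2.5) can be sketched with two random linear bases standing in for real 3DMMs; the least-squares parameter mapping and the averaging merge below are illustrative choices, not the patented procedure.

```python
import numpy as np

# Sketch of combining two morphable models: sample shapes from model A,
# fit a linear map from model B's parameters to model A's, build merged
# shapes, and run PCA on them to obtain a combined model.

rng = np.random.default_rng(4)
n_points, pa, pb, n_samples = 30, 6, 4, 200

basis_a = rng.normal(size=(n_points, pa))   # stand-in 3DMM A basis
basis_b = rng.normal(size=(n_points, pb))   # stand-in 3DMM B basis

# (2.1) generate shapes with the first model.
params_a = rng.normal(size=(n_samples, pa))
shapes_a = params_a @ basis_a.T

# (2.2) map B parameters to A parameters: project A shapes into B's
# parameter space, then solve the B -> A map by least squares.
params_b = shapes_a @ np.linalg.pinv(basis_b).T
b_to_a, *_ = np.linalg.lstsq(params_b, params_a, rcond=None)

# (2.3) for each shape generated with B, build the corresponding A shape.
new_params_b = rng.normal(size=(n_samples, pb))
shapes_b = new_params_b @ basis_b.T
shapes_a_corr = (new_params_b @ b_to_a) @ basis_a.T

# (2.4) merge each B shape with its corresponding A shape (here: average).
merged = 0.5 * (shapes_b + shapes_a_corr)

# (2.5) PCA on the merged shapes yields the new combined model's basis.
mean_shape = merged.mean(axis=0)
_, s, vt = np.linalg.svd(merged - mean_shape, full_matrices=False)
combined_basis = vt[:pa]                    # leading principal components
print(combined_basis.shape)
```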