Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Notice to Applicant
This communication is in response to the action filed on 12/28/2023.
Claims 1-16 are pending.
Information Disclosure Statement
The information disclosure statements (IDSs) filed on 12/28/2023 and 05/03/2024 have been fully considered.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 4-6, 8-9, 11-13, and 15-16 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by US 2022/0156987 A1 to CHANDRAN et al. (hereinafter “CHANDRAN”).
As per claim 1, CHANDRAN discloses a method of operating a device for extracting a style-based sketch (a system, and a related method of using the system, for extracting a style-based sketch from input image data; abstract; figs. 2-3; paragraphs [0023], [0027-0028]), the method comprising: inputting a color image for extracting a sketch by a first attention-based convolutional layer (a style transfer occurs between an input content sample and a generated style sample; this is performed by a convolutional neural network comprising convolutional layers adapted for extracting sketches from content, wherein the content comprises video frames/images; abstract; fig. 1; paragraphs [0017-0019], [0027-0028], [0033], [0039], [0044], [0046], [0089]); inputting a reference image comprising a style of the sketch by a second attention-based convolutional layer (the style sample is input into the system as an input image, and its style features may then be combined with the content of the content sample to produce a style transfer result that "adapts" the content in the content sample to the style of the style sample, the style transfer result being a sketch; figs. 2-3; paragraphs [0007], [0023], [0025-0028], [0082]); inputting a sum of attention information output from each of the first attention-based convolutional layer and the second attention-based convolutional layer by a convolutional layer for generating the style-based sketch (the convolutional layer structure includes decoder 206 and encoders 202 and 204 of a pretrained CNN, which generate feature embeddings Fc and Fs from the respective content and style samples, converting different types of data (2D image data and 3D mesh data) into the feature embeddings Fc and Fs; the encoders are trained using training engine 122, which calculates an overall loss as a weighted sum of the style loss and the content loss and uses gradient descent and backpropagation to update parameters of the kernel predictor and decoder network in a way that reduces the overall loss, the loss being considered substantially similar to attention information; fig. 2; paragraphs [0029-0031], [0038], [0042], [0052]); inputting a sum of outputs of the first attention-based convolutional layer and the second attention-based convolutional layer by a third attention-based convolutional layer for normalization of the style-based sketch (inputting to objective function 212, which produces a weighted sum value, i.e., a loss function that includes the sum of style loss 232 multiplied by one coefficient and content loss 234 multiplied by another coefficient, where the coefficients may sum to 1 and each coefficient may be selected to increase or decrease the presence of style-based attributes 238 and content-based attributes 240 in decoder output 210; the overall loss thus comprises a weighted sum of a style loss between a third latent representation of the style transfer result and the first latent representation of the style sample and a content loss between the third latent representation of the style transfer result and the second latent representation of the content sample; figs. 1-3; paragraphs [0042-0046], [0084]); and extracting a sketch image corresponding to the color image by inputting a sum of outputs of the convolutional layer for generating the style-based sketch and the third attention-based convolutional layer to a decoder (the computing system concludes by combining a plurality of content-based attributes 240 of content sample 226, which include color-based images of the input content sample, with style-based attributes 238 of style sample 230, which include sketch/color drawings, into a style transfer result 236 that is output/generated as a sketch; the style transfer result is stated to be one of a drawing, painting, sketch, rendering, photograph, and/or another 2D or 3D depiction that is different from content sample 226 but is based on the extracted content attributes 240, the extraction being performed using execution engine 124, which includes style transfer model 200 comprising encoders 202, 204, a kernel predictor 220, and a decoder 206 of a convolutional neural network (CNN); figs. 2-3; paragraphs [0025-0029]).
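For clarity of the record regarding the mapping above, the overall objective characterized by CHANDRAN (a weighted sum of style loss 232 and content loss 234, with coefficients that may sum to 1; paragraphs [0042-0046]) may be expressed in the following illustrative form; the notation is the examiner's own and is not taken verbatim from CHANDRAN:

$$\mathcal{L}_{\text{total}} = \lambda_{s}\,\mathcal{L}_{\text{style}} + \lambda_{c}\,\mathcal{L}_{\text{content}}, \qquad \lambda_{s} + \lambda_{c} = 1$$

where the style loss is a measure of distance (e.g., cosine similarity or Euclidean distance) between the latent representation of the style transfer result and that of the style sample, and the content loss is a corresponding distance between the latent representation of the style transfer result and that of the content sample.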
As per claim 2, CHANDRAN discloses the method of claim 1, wherein the third attention-based convolutional layer is configured to be trained using a loss function of a blank between a sketch image of the color image based on the style and a sketch output from the decoder (the style transfer model 200 is a convolutional neural network comprising convolutional layers that are trained via training engine 122, which updates the weighted parameters of the convolutional layers of model 200 using a loss function; the loss function computes style loss 232, calculated as a measure of distance (e.g., cosine similarity, Euclidean distance) between latent representations 218 and 242, and content loss 234, which represents a difference between latent representation 242 and latent representation 216 of the generated style and the content input, reflected in the content/style transfer output of the system; NOTE: the instant specification defines “blank” at paragraph [0045] as “a blank of the output sketch image as a loss function so that the sketch image is output without any noise”; see paragraph [0067] of CHANDRAN, in which operation 502 generates conventional kernels at each layer of the synthesis model/network and is without noise, i.e., blank, until steps 504 and 506, when Gaussian noise is input to the model; paragraphs [0038-0046], [0067]).
As per claim 4, CHANDRAN discloses the method of claim 1, wherein, based on a pair of a training color image and a sketch image of the training color image based on the style, at least one of the first attention-based convolutional layer, the second attention-based convolutional layer, or the third attention-based convolutional layer is trained (training engine 122 is adapted to perform training of the respective convolutional neural network (CNN) layers by randomly selecting the training style sample 228 from a set of training style samples in the training data set and randomly selecting the training content sample 224 from a set of training content samples in the training data, wherein the samples include color images as content samples and a sketch representation of the color images as style samples and are further selected as pairs of training data from training data 214; paragraphs [0038-0040], [0045-0046], [0050-0053]).
As per claim 5, CHANDRAN discloses the method of claim 1, wherein the extracting of the sketch image comprises outputting, as an image, information indicating that a sketch of the color image based on a style of the reference image is extracted (the extracted style-based attributes and content-based attributes represent extracted attributes of the style content, which is generated from the content input into the system, i.e., color-based reference images of a scene; the style content produced from the style attributes represents a generated style transfer result as a sketch or drawing based on the input content reference image; abstract; figs. 2, 4; paragraphs [0007], [0027-0028], [0046], [0051]).
As per claim 6, CHANDRAN discloses a method of training a neural network for extracting a style-based sketch (a system, and a related method of using the system, for extracting a style-based sketch from input image data; abstract; figs. 2-3; paragraphs [0023], [0027-0028]), the method comprising: inputting a training color image and a reference sketch image of the training color image to the neural network (using training engine 122, the system is adapted to train a model to perform a style transfer between an input content sample and a generated style sample; this is performed by a convolutional neural network adapted for extracting sketches from content, wherein the content comprises video frames/images, and the model is trained by training engine 122 using training content samples 224 and training style samples 228 as training data 214 to create model 200; abstract; figs. 1-2; paragraphs [0017-0019], [0023], [0025-0028], [0038-0039]); extracting a sketch image in a style of the reference sketch image of the training color image from the neural network (extracting a plurality of content-based attributes 240 of content sample 226 and style-based attributes 238, which include color-based images of the input content sample and sketch/color drawings of style sample 230, into a style transfer result 236 that is a sketch and is stated to be one of a drawing, painting, sketch, rendering, photograph, and/or another 2D or 3D depiction that is different from content sample 226 but is based on the extracted content attributes 240, extracted using execution engine 124, which includes style transfer model 200; abstract; figs. 1-2; paragraphs [0017-0019], [0023], [0025-0028], [0038-0039]); calculating a loss function between the reference sketch image and the sketch image (calculating loss using objective function 212, which is a loss function; the style transfer model 200 is a convolutional neural network comprising convolutional layers that are trained via training engine 122, which updates the weighted parameters of the convolutional layers of model 200 using a loss function that computes style loss 232, calculated as a measure of distance (e.g., cosine similarity, Euclidean distance) between latent representations 218 and 242, and content loss 234, which represents a difference between latent representation 242 and latent representation 216; paragraphs [0042-0045]); and training at least one of a first attention-based convolutional layer, a second attention-based convolutional layer, or a third attention-based convolutional layer constituting the neural network using the loss function (using the calculated style loss 232 and content loss 234, training engine 122 trains model 200, which includes the convolutional neural network (CNN) having three or more layers, namely encoder layers 202, 204, and 208 and a decoder layer 206, all of which are trained via the engine; figs. 1-2; paragraphs [0039-0040], [0042-0046]).
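As an illustrative aid to the training procedure mapped above (training engine 122 computing style loss 232 and content loss 234 as distances between latent representations, combining them as a weighted sum, and updating model parameters by gradient descent and backpropagation; CHANDRAN paragraphs [0042-0046], [0052]), the examiner provides the following generic sketch. All function and variable names are the examiner's assumptions for illustration only and do not reproduce CHANDRAN's actual implementation.

```python
# Illustrative sketch only: a generic style/content training step of the kind
# characterized above (weighted style + content loss, gradient descent with
# backpropagation). Names and weight values are hypothetical.
import torch

def training_step(model, encoder, content_img, style_img, optimizer,
                  w_style=0.5, w_content=0.5):
    # Generate the style transfer result from the content/style pair.
    result = model(content_img, style_img)

    # Latent representations of the inputs and of the result.
    f_content = encoder(content_img)
    f_style = encoder(style_img)
    f_result = encoder(result)

    # Distances between latent representations serve as the losses.
    style_loss = torch.dist(f_result, f_style)      # e.g., Euclidean distance
    content_loss = torch.dist(f_result, f_content)

    # Overall loss is a weighted sum; the coefficients may sum to 1.
    total_loss = w_style * style_loss + w_content * content_loss

    # Backpropagation and a gradient-descent parameter update.
    optimizer.zero_grad()
    total_loss.backward()
    optimizer.step()
    return total_loss.item()
```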
As per claim 8, CHANDRAN discloses the method of claim 6, wherein the calculating of the loss function between the reference sketch image and the sketch image comprises calculating the loss function between a blank of the reference sketch image and a blank of the sketch image, and the training of the at least one attention-based convolutional layer comprises, by the third attention-based convolutional layer for normalizing the blank of the sketch image, training at least one of the first attention-based convolutional layer, the second attention-based convolutional layer, or the third attention-based convolutional layer by inputting the calculated loss function between the blanks (the style transfer model 200 is a convolutional neural network comprising convolutional layers that are trained via training engine 122, which updates the weighted parameters of the convolutional layers of model 200 using a loss function; the loss function computes style loss 232, calculated as a measure of distance (e.g., cosine similarity, Euclidean distance) between latent representations 218 and 242, and content loss 234, which represents a difference between latent representation 242 and latent representation 216 of the generated style and the content input, reflected in the content/style transfer output of the system; NOTE: the instant specification defines “blank” at paragraph [0045] as “a blank of the output sketch image as a loss function so that the sketch image is output without any noise”; see paragraph [0067] of CHANDRAN, in which operation 502 generates conventional kernels at each layer of the synthesis model/network and is without noise, i.e., blank, until steps 504 and 506, when Gaussian noise is input to the model; paragraphs [0038-0046], [0067]).
As per claim 9, CHANDRAN discloses the method of claim 6, wherein the calculating of the loss function between the reference sketch image and the sketch image comprises calculating a difference between pieces of feature information extracted from the reference sketch image and the sketch image (the objective function 212 is a loss function that calculates the style loss 232 and the content loss 234 between the style sample and the content sample input to the model 200 comprising the CNN; figs. 1-2; paragraphs [0039-0040], [0042-0046]), and the training of the at least one attention-based convolutional layer comprises training at least one of the first attention-based convolutional layer, the second attention-based convolutional layer, or the third attention-based convolutional layer in reference to the calculated difference between the pieces of the feature information (after training engine 122 has completed training of style transfer model 200, which includes encoder 202, encoder 204, encoder 208, and decoder 206, i.e., the convolutional layers of the convolutional neural network (CNN) of model 200, execution engine 124 may execute the trained style transfer model 200 to produce style transfer result 236 from a new content sample 226 and style sample 230; fig. 2; paragraphs [0039-0046]).
As per claim 11, CHANDRAN discloses a non-transitory computer-readable storage medium storing instructions that, when executed by a processor, cause the processor to perform the method of claim 1 (the system includes computer processing components such as a processor and memory to store and execute instructions, programs, and data of the system and corresponding method; paragraph [0070]).
As per claim 12, CHANDRAN discloses a device for extracting a style-based sketch, the device comprising: at least one processor (a system, and a related method of using the system, for extracting a style-based sketch from input image data, the system comprising a computer processor; abstract; figs. 2-3; paragraphs [0022-0023], [0027-0028]); a memory (further comprising a memory to store instructions, programs, and data; paragraphs [0021-0022]); and at least one program stored in the memory and configured to be executed by the at least one processor, wherein the program is configured to execute (programs related to the performed method are stored in the memory and are executable via the processor of the system; abstract; figs. 2-3; paragraphs [0017-0018], [0021-0023], [0027-0028]): inputting a color image for extracting a sketch by a first attention-based convolutional layer (a style transfer occurs between an input content sample and a generated style sample; this is performed by a convolutional neural network comprising convolutional layers adapted for extracting sketches from content, wherein the content comprises video frames/images; abstract; fig. 1; paragraphs [0017-0019], [0027-0028], [0033], [0039], [0044], [0046], [0089]); inputting a reference image comprising a style of the sketch by a second attention-based convolutional layer (the style sample is input into the system as an input image, and its style features may then be combined with the content of the content sample to produce a style transfer result that "adapts" the content in the content sample to the style of the style sample, the style transfer result being a sketch; figs. 2-3; paragraphs [0007], [0023], [0025-0028], [0082]); inputting a sum of attention information output from each of the first attention-based convolutional layer and the second attention-based convolutional layer by a convolutional layer for generating the style-based sketch (the convolutional layer structure includes decoder 206 and encoders 202 and 204 of a pretrained CNN, which generate feature embeddings Fc and Fs from the respective content and style samples, converting different types of data (2D image data and 3D mesh data) into the feature embeddings Fc and Fs; the encoders are trained using training engine 122, which calculates an overall loss as a weighted sum of the style loss and the content loss and uses gradient descent and backpropagation to update parameters of the kernel predictor and decoder network in a way that reduces the overall loss, the loss being considered substantially similar to attention information; fig. 2; paragraphs [0029-0031], [0038], [0042], [0052]); inputting a sum of outputs of the first attention-based convolutional layer and the second attention-based convolutional layer by a third attention-based convolutional layer for normalization of the style-based sketch (inputting to objective function 212, which produces a weighted sum value, i.e., a loss function that includes the sum of style loss 232 multiplied by one coefficient and content loss 234 multiplied by another coefficient, where the coefficients may sum to 1 and each coefficient may be selected to increase or decrease the presence of style-based attributes 238 and content-based attributes 240 in decoder output 210; the overall loss thus comprises a weighted sum of a style loss between a third latent representation of the style transfer result and the first latent representation of the style sample and a content loss between the third latent representation of the style transfer result and the second latent representation of the content sample; figs. 1-3; paragraphs [0042-0046], [0084]); and extracting a sketch image corresponding to the color image by inputting a sum of outputs of the convolutional layer for generating the style-based sketch and the third attention-based convolutional layer to a decoder (the computing system concludes by combining a plurality of content-based attributes 240 of content sample 226, which include color-based images of the input content sample, with style-based attributes 238 of style sample 230, which include sketch/color drawings, into a style transfer result 236 that is output/generated as a sketch; the style transfer result is stated to be one of a drawing, painting, sketch, rendering, photograph, and/or another 2D or 3D depiction that is different from content sample 226 but is based on the extracted content attributes 240, the extraction being performed using execution engine 124, which includes style transfer model 200 comprising encoders 202, 204, a kernel predictor 220, and a decoder 206 of a convolutional neural network (CNN); figs. 2-3; paragraphs [0025-0029]).
As per claim 13, CHANDRAN discloses the device of claim 12, wherein the third attention-based convolutional layer is configured to be trained using a loss function of a blank between a sketch image of the color image based on the style and a sketch output from the decoder (the style transfer model 200 is a convolutional neural network comprising convolutional layers that are trained via training engine 122, which updates the weighted parameters of the convolutional layers of model 200 using a loss function; the loss function computes style loss 232, calculated as a measure of distance (e.g., cosine similarity, Euclidean distance) between latent representations 218 and 242, and content loss 234, which represents a difference between latent representation 242 and latent representation 216 of the generated style and the content input, reflected in the content/style transfer output of the system; NOTE: the instant specification defines “blank” at paragraph [0045] as “a blank of the output sketch image as a loss function so that the sketch image is output without any noise”; see paragraph [0067] of CHANDRAN, in which operation 502 generates conventional kernels at each layer of the synthesis model/network and is without noise, i.e., blank, until steps 504 and 506, when Gaussian noise is input to the model; paragraphs [0038-0046], [0067]).
As per claim 15, CHANDRAN discloses the device of claim 12, wherein, based on a pair of a training color image and a sketch image of the training color image based on the style, at least one of the first attention-based convolutional layer, the second attention-based convolutional layer, or the third attention-based convolutional layer is trained (training engine 122 is adapted to perform training of the respective convolutional neural network (CNN) layers by randomly selecting the training style sample 228 from a set of training style samples in the training data set and randomly selecting the training content sample 224 from a set of training content samples in the training data, wherein the samples include color images as content samples and a sketch representation of the color images as style samples and are further selected as pairs of training data from training data 214; paragraphs [0038-0040], [0045-0046], [0050-0053]).
As per claim 16, CHANDRAN discloses the device of claim 12, wherein the extracting of the sketch image comprises outputting, as an image, information indicating that a sketch of the color image based on a style of the reference image is extracted (the extracted style-based attributes and content-based attributes represent extracted attributes of the style content, which is generated from the content input into the system, i.e., color-based reference images of a scene; the style content produced from the style attributes represents a generated style transfer result as a sketch or drawing based on the input content reference image; abstract; figs. 2, 4; paragraphs [0007], [0027-0028], [0046], [0051]).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claims 3, 7, 10, and 14 are rejected under 35 U.S.C. § 103 as being unpatentable over US 2022/0156987 A1 to CHANDRAN et al. (hereinafter “CHANDRAN”) in view of US 2022/0108542 A1 to ZHANG et al. (hereinafter “ZHANG”).
As per claim 3, CHANDRAN discloses the method of claim 1. CHANDRAN fails to disclose wherein the inputting of the reference image comprising the style of the sketch by the second attention-based convolutional layer comprises inputting a reverse image or a rotated image of the reference image.
ZHANG discloses wherein the inputting of the reference image comprising the style of the sketch by the second attention-based convolutional layer comprises inputting a reverse image or a rotated image of the reference image (a system for style transfer of an input image region to a style transfer target image X using a stylistic reference image and a content image; the region of interest for the style transfer is found and rotated to its optimal rotation angle, and the image region is fused (input) with the image region in the original image using the correction/style/segmentation models; the models comprise convolutional layers to perform this segmentation and rotation/correction process, which is performed via a GAN; abstract; figs. 11-12 and 14A-15; paragraphs [0079-0083], [0118], [0170-0175], [0184], [0219-0235]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify CHANDRAN to input a reverse image or a rotated image of the reference image as taught by ZHANG. The suggestion/motivation for doing so would have been, as stated by ZHANG at paragraph [0219], that in order to make the model better apply to terminal devices using the object segmentation model, the third layer in the pyramid pooling module may be removed; after removing this layer, the multi-scale information of the PSPNET is retained to ensure segmentation accuracy, while the network structure is reduced so that the model runs faster on the terminal and may run on more types of terminal devices, including a mobile terminal device. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine ZHANG with CHANDRAN to obtain the invention as specified in claim 3.
As per claim 7, CHANDRAN in view of ZHANG discloses the method of claim 6. CHANDRAN fails to disclose wherein the inputting of the training color image and the reference sketch image of the training color image to the neural network comprises inputting the reference sketch image that is reversed or rotated to the neural network.
ZHANG discloses wherein the inputting of the training color image and the reference sketch image of the training color image to the neural network comprises inputting the reference sketch image that is reversed or rotated to the neural network (before the object to be processed is segmented through the object segmentation model, which includes the GAN (neural network), a detection module performing target object detection and an object rotation angle prediction and correction module are applied; the object rotation angle prediction and correction module predicts the rotation angle of the object in the region, performs the rotation correction on the input image based on the rotation angle to obtain a corrected image, and then detects the object position in the corrected image again to obtain the object region, as seen in FIG. 12; the result is then input into the style transfer model and the segmentation model, which both comprise neural networks and perform the style transfer operation; figs. 11-12 and 14A-15; paragraphs [0079-0083], [0118], [0170-0175], [0184], [0219-0233]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify CHANDRAN to input the reference sketch image that is reversed or rotated to the neural network as taught by ZHANG. The suggestion/motivation for doing so would have been, as stated by ZHANG at paragraph [0219], that in order to make the model better apply to terminal devices using the object segmentation model, the third layer in the pyramid pooling module may be removed; after removing this layer, the multi-scale information of the PSPNET is retained to ensure segmentation accuracy, while the network structure is reduced so that the model runs faster on the terminal and may run on more types of terminal devices, including a mobile terminal device. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine ZHANG with CHANDRAN to obtain the invention as specified in claim 7.
As per claim 10, CHANDRAN in view of ZHANG discloses the method of claim 6, and the training of the at least one attention-based convolutional layer comprises training at least one of the first attention-based convolutional layer, the second attention-based convolutional layer, or the third attention-based convolutional layer in further reference to the adversarial loss (training engine 122 is adapted to train model 200, which includes the CNN and its convolutional layers, i.e., encoder 202, encoder 204, encoder 208, and decoder 206, all of which are trained using engine 122 and objective function 212, which is a loss function, and which would be trained using the adversarial loss provided by ZHANG; paragraphs [0039-0046]). CHANDRAN fails to disclose wherein the calculating of the loss function between the reference sketch image and the sketch image comprises obtaining an adversarial loss output from a discriminator of the neural network.
ZHANG discloses wherein the calculating of the loss function between the reference sketch image and the sketch image comprises obtaining an adversarial loss output from a discriminator of the neural network (the neural network structure of the models further comprises a generative adversarial network (GAN), and the loss is calculated between the input style image and the input content image of the style transfer and the generated style transfer result, these losses being used to train and update the GAN; paragraphs [0225], [0235-0237]).
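For context with respect to the adversarial loss mapped above, the examiner notes that an adversarial loss output from a discriminator is conventionally formulated as the standard generative adversarial network objective below; the notation is the examiner's own, and it is not asserted that ZHANG uses this exact expression:

$$\mathcal{L}_{\text{adv}} = \mathbb{E}_{x}\big[\log D(x)\big] + \mathbb{E}_{\hat{x}}\big[\log\big(1 - D(\hat{x})\big)\big]$$

where D is the discriminator, x denotes a real sample (e.g., the reference sketch image), and x̂ denotes the generated sketch image output by the generator; the generator is trained to minimize this quantity while the discriminator is trained to maximize it.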
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify CHANDRAN to obtain an adversarial loss output from a discriminator of the neural network as taught by ZHANG. The suggestion/motivation for doing so would have been to provide a way to decrease the loss in content and style during image generation, with the purpose of generating an image X (that is, the style transfer target image) having a style similar to Xs and the same content as Xc, that is, transferring the content image Xc into an image X with the same style as Xs, as suggested by ZHANG at paragraph [0235]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine ZHANG with CHANDRAN to obtain the invention as specified in claim 10.
As per claim 14, CHANDRAN discloses the device of claim 12. CHANDRAN fails to disclose wherein the inputting the reference image comprising the style of the sketch by the second attention-based convolutional layer comprises inputting a reverse image or a rotated image of the reference image.
ZHANG discloses wherein the inputting the reference image comprising the style of the sketch by the second attention-based convolutional layer comprises inputting a reverse image or a rotated image of the reference image (a system for style transfer of an input image region to a style transfer target image X using a stylistic reference image and a content image; the region of interest for the style transfer is found and rotated to its optimal rotation angle, and the image region is fused (input) with the image region in the original image using the style/segmentation models; the models comprise convolutional layers to perform this segmentation and rotation/correction process; abstract; figs. 11-12 and 14A-15; paragraphs [0079-0083], [0118], [0170-0175], [0184], [0219-0233]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify CHANDRAN to input a reverse image or a rotated image of the reference image as taught by ZHANG. The suggestion/motivation for doing so would have been, as stated by ZHANG at paragraph [0219], that in order to make the model better apply to terminal devices using the object segmentation model, the third layer in the pyramid pooling module may be removed; after removing this layer, the multi-scale information of the PSPNET is retained to ensure segmentation accuracy, while the network structure is reduced so that the model runs faster on the terminal and may run on more types of terminal devices, including a mobile terminal device. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine ZHANG with CHANDRAN to obtain the invention as specified in claim 14.
Conclusion
The following prior art, made of record and not relied upon, is considered pertinent to applicant's disclosure and could have been used in an art rejection:
US 2024/0419382 A1
US 2024/0169623 A1
US 2024/0135737 A1
US 2021/0256304 A1
US 2021/0012181 A1
Bionic Face Sketch Generator - NPL
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEVIN JACOB DHOOGE whose telephone number is (571) 270-0999. The examiner can normally be reached 7:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached on (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Devin Dhooge/
USPTO Patent Examiner
Art Unit 2677
/ANDREW W BEE/Supervisory Patent Examiner, Art Unit 2677