DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of claims: claims 1-9 and 17-20 are examined below. Claims 10-16 are withdrawn from examination.
Election/Restrictions
Applicant's election with traverse of Group 1, claims 1-9 and 17-20, in the reply filed on 12/10/2025 is acknowledged. The traversal is on the ground(s) that claims 10-16 are directed to the same invention: a method for generating a saliency map and a method for detecting an abnormal object. For example, the saliency map is used to improve the explainability of an image classification model, and the method for detecting an abnormal object prevents the abnormal object from being used to obtain the saliency map, thereby improving the accuracy of the saliency map. This is not found persuasive because claims 10-16 are focused on detecting an abnormal object using feature extraction with a loss-value deviation distance between a training sample and a weight vector, classified in G06V 10/761 (proximity, similarity or dissimilarity measures), which is a different field of search and classification from Group 1.
The requirement is still deemed proper and is therefore made FINAL.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 10/25/2024 and 12/13/2024 have been received and are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 7 and 17-18 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Perry et al. (US 2016/0292836).
Claim 1:
Perry et al. (US 2016/0292836) anticipates the following subject matter:
A method for generating a saliency map, comprising:
obtaining a plurality of objects, wherein the plurality of objects are obtained by performing disturbance processing on a first object (figures 11-12 show a plurality of objects (person and house) processed with disturbance processing (figure 12, part 1202 contrast adjustment and part 1206 luminance adjustment); figure 11 and paragraph 0179, where the plurality of objects (person 1110 and house 1120) are processed to generate a saliency map with the importance of person 1110 (first object) due to applying more adjustment);
performing screening processing on the plurality of objects based on a first condition, to obtain a plurality of updated objects (figure 12 and paragraphs 0105-0112 teach a first condition (contrast or luminance) used to output updated objects such as the person and house), wherein the plurality of updated objects satisfy target data distribution (figure 12 and paragraphs 0105-0112 teach the updated objects after the condition is applied, where paragraph 0111 and figure 12 further detail contrast adjustment for a first region (1261, person)), the target data distribution is obtained based on a training sample, and the training sample is used to train a preset model to obtain a target model (paragraph 0005 teaches use of machine learning (preset model) with features extracted from an original image (training sample) and prediction of presence/absence in a transformed image (target data distribution from the contrast/luminance adjustment taught above));
obtaining an input of the target model based on the plurality of updated objects (figure 13 and paragraph 0197 teach the person and house (updated objects) further processed with a content masking map with content such as a human being); and
generating a saliency map of the first object based on a first prediction result output by the target model and the plurality of updated objects (outputting the human being (target and saliency map) from the updated objects of figure 11b with adjustment of contrast/luminance, figure 12 with further contrast adjustment, and figure 13 with content masking for the person/human (first object as the target)).
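For illustration only, the claimed perturb-screen-predict-aggregate flow of claim 1 can be sketched as follows. This is a minimal hypothetical example (not the applicant's or Perry's actual implementation); the perturbation, the screening statistic, and the stand-in model are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(image, n_masks=50, p=0.5):
    """Disturbance processing: generate masked copies of the first object (image)."""
    h, w = image.shape
    masks = rng.random((n_masks, h, w)) < p          # random binary occlusion masks
    objects = masks * image                          # the plurality of objects
    return objects, masks

def screen(objects, masks, mean, std, k=2.0):
    """First condition: keep only objects whose mean intensity lies within
    k standard deviations of an assumed target data distribution (mean, std)."""
    stats = objects.mean(axis=(1, 2))
    keep = np.abs(stats - mean) <= k * std
    return objects[keep], masks[keep]

def model(batch):
    """Stand-in target model: scores each object (a real model would be a
    trained classifier producing the first prediction result)."""
    return batch.mean(axis=(1, 2))

image = rng.random((8, 8))                           # first object
objects, masks = perturb(image)
kept, kept_masks = screen(objects, masks, mean=image.mean() / 2, std=image.std())
scores = model(kept)                                 # first prediction result
# Saliency map: prediction-weighted average of the kept perturbation masks.
saliency = (scores[:, None, None] * kept_masks).sum(axis=0) / len(scores)
```

The aggregation step (score-weighted masks) follows the common perturbation-based saliency pattern; the claim itself does not specify the aggregation, so this choice is illustrative.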
Claim 2:
The method according to claim 1, wherein the first condition is deleting a target object from the plurality of objects, a distance between a feature of the target object and a weight vector of the target model exceeds a preset threshold, and the feature of the target object is obtained by performing feature extraction on the target object by using the target model (paragraph 0187 teaches screening and directing attention away from (removing) image parts (target) due to a weight preset threshold modelled as a parameterised 2-D Gaussian function with mean (0.5 w, 0.5 h) and standard deviation (0.28 w, 0.26 h), where w and h are the width and the height of the target, and its position bias map 541 (distance); paragraphs 0100-0101 teach feature extraction with the use of content masking maps from contrast and luminance adjustment, with further modification of the contrast of the transformed image to an optimal adjustment as taught above).
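The parameterised 2-D Gaussian position bias map cited from Perry's paragraph 0187, with mean (0.5 w, 0.5 h) and standard deviation (0.28 w, 0.26 h), can be reproduced directly; the following sketch only instantiates those published parameters and is otherwise illustrative.

```python
import numpy as np

def position_bias_map(w, h):
    """2-D Gaussian position bias map per Perry's paragraph 0187:
    mean (0.5 w, 0.5 h), standard deviation (0.28 w, 0.26 h),
    where w and h are the width and height of the target."""
    x = np.arange(w)
    y = np.arange(h)
    gx = np.exp(-0.5 * ((x - 0.5 * w) / (0.28 * w)) ** 2)
    gy = np.exp(-0.5 * ((y - 0.5 * h) / (0.26 * h)) ** 2)
    return gy[:, None] * gx[None, :]      # separable Gaussian, peak at the centre

bias = position_bias_map(64, 48)          # bias toward the image centre
```

Such a map biases attention toward the centre of the frame, which is how a weight threshold can screen peripheral image parts away.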
Claim 7:
The method according to claim 1, wherein the target model is obtained by updating the preset model by using a first loss value, the first loss value is determined based on a deviation between a feature of the training sample and a weight vector of the preset model, and the feature of the training sample is obtained by performing feature extraction on the training sample by using the preset model (paragraph 0187 teaches screening and directing attention away from (removing) image parts (target) due to a weight preset threshold modelled as a parameterised 2-D Gaussian function with mean (0.5 w, 0.5 h) and standard deviation (0.28 w, 0.26 h), where w and h are the width and the height of the target, and its position bias map 541).
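For illustration, a "first loss value" determined from the deviation between sample features and a model weight vector, as the claim language recites, might take a form like the mean squared deviation below. The squared-error choice is an assumption; the claim does not fix the deviation metric.

```python
import numpy as np

def first_loss(features, weight_vector):
    """Illustrative 'first loss value': mean squared deviation between each
    training sample's feature vector and the preset model's weight vector."""
    dev = features - weight_vector          # deviation per sample, per dimension
    return float((dev ** 2).mean())

feats = np.array([[1.0, 2.0], [3.0, 4.0]])  # features extracted from training samples
w = np.array([2.0, 3.0])                    # weight vector of the preset model
loss = first_loss(feats, w)
```

Minimising such a loss during training would pull the weight vector toward the training-sample features, yielding the updated (target) model.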
Claim 17:
An electronic device, wherein the electronic device comprises:
at least one processor; and at least one memory coupled to the at least one processor to store program instructions, which when executed by the processor, cause the at least one processor to (paragraphs 0023-0024 teach use of a processor and memory):
obtain a plurality of objects, wherein the plurality of objects are obtained by performing disturbance processing on a first object (figures 11-12 show a plurality of objects (person and house) processed with disturbance processing (figure 12, part 1202 contrast adjustment and part 1206 luminance adjustment); figure 11 and paragraph 0179, where the plurality of objects (person 1110 and house 1120) are processed to generate a saliency map with the importance of person 1110 (first object) due to applying more adjustment);
perform screening processing on the plurality of objects based on a first condition, to obtain a plurality of updated objects (figure 12 and paragraphs 0105-0112 teach a first condition (contrast or luminance) used to output updated objects such as the person and house), wherein the plurality of updated objects satisfy target data distribution (figure 12 and paragraphs 0105-0112 teach the updated objects after the condition is applied, where paragraph 0111 and figure 12 further detail contrast adjustment for a first region (1261, person)), the target data distribution is obtained based on a training sample, and the training sample is used to train a preset model to obtain a target model (paragraph 0005 teaches use of machine learning (preset model) with features extracted from an original image (training sample) and prediction of presence/absence in a transformed image (target data distribution from the contrast/luminance adjustment taught above));
obtain an input of the target model based on the plurality of updated objects (figure 13 and paragraph 0197 teach the person and house (updated objects) further processed with a content masking map with content such as a human being); and
generate a saliency map of the first object based on a first prediction result output by the target model and the plurality of updated objects (outputting the human being (target and saliency map) from the updated objects of figure 11b with adjustment of contrast/luminance, figure 12 with further contrast adjustment, and figure 13 with content masking for the person/human (first object as the target)).
Claim 18:
The device according to claim 17, wherein the first condition is deleting a target object from the plurality of objects, a distance between a feature of the target object and a weight vector of the target model exceeds a preset threshold, and the feature of the target object is obtained by performing feature extraction on the target object by using the target model (paragraph 0187 teaches screening and directing attention away from (removing) image parts (target) due to a weight preset threshold modelled as a parameterised 2-D Gaussian function with mean (0.5 w, 0.5 h) and standard deviation (0.28 w, 0.26 h), where w and h are the width and the height of the target, and its position bias map 541 (distance); paragraphs 0100-0101 teach feature extraction with the use of content masking maps from contrast and luminance adjustment, with further modification of the contrast of the transformed image to an optimal adjustment as taught above).
Allowable Subject Matter
Claim 3, and its dependent claims 4-6, are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. At the time of examination, the examiner was unable to find a teaching regarding “…wherein the feature of the target object is extracted by using a first feature extraction layer, the first feature extraction layer is any one of a plurality of feature extraction layers in the target model, the distance between the feature of the target object and the weight vector of the target model is a distance between the feature of the target object and a weight vector of a second feature extraction layer, and the second feature extraction layer is any one of the plurality of feature extraction layers.”
Claim 8 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. At the time of examination, the examiner was unable to find a teaching regarding “…wherein the target model is obtained by updating the preset model by using the first loss value and a second loss value, the second loss value is determined based on a deviation between a target result and a real result of the training sample, the target result is determined based on a second prediction result and a preset function, the second prediction result is a prediction result of the preset model for the training sample, an input of the preset function is the second prediction result, an output of the preset function is the target result, and the output of the preset function is negatively correlated with the input of the preset function.”
Claim 9 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. At the time of examination, the examiner was unable to find a teaching regarding “…setting weights of the plurality of updated objects to a first weight; and setting weights of a plurality of remaining objects to a second weight, wherein the plurality of remaining objects are objects other than the plurality of updated objects in the plurality of objects, and the first weight is greater than the second weight; and wherein the obtaining an input of the target model based on the plurality of updated objects comprises: obtaining a first result based on the first weight and the plurality of updated objects, wherein the first result is an input of the target model, the input of the target model further comprises a second result, and the second result is obtained based on the second weight and the plurality of remaining objects.”
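The weighted-combination limitation of claim 9 can be sketched in a few lines for illustration. Everything here (the weight values, the array shapes, the concatenation of the two weighted results into the model input) is a hypothetical instantiation of the claim language, not the applicant's implementation.

```python
import numpy as np

# Claim 9 sketch: updated objects receive a larger first weight, the remaining
# objects a smaller second weight, and both weighted results form the model input.
updated = np.ones((3, 4, 4))        # plurality of updated objects
remaining = np.ones((2, 4, 4))      # objects other than the updated ones
w1, w2 = 0.9, 0.1                   # first weight > second weight

first_result = w1 * updated         # first result, from the updated objects
second_result = w2 * remaining      # second result, from the remaining objects
model_input = np.concatenate([first_result, second_result])
```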
Claim 19, and its dependent claim 20, are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. At the time of examination, the examiner was unable to find a teaching regarding “…wherein a feature of a target object is extracted by using a first feature extraction layer, the first feature extraction layer is any one of a plurality of feature extraction layers in the target model, a distance between the feature of the target object and a weight vector of the target model is a distance between the feature of the target object and a weight vector of a second feature extraction layer, and the second feature extraction layer is any one of the plurality of feature extraction layers.”
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Jetley et al. (US 2017/0308770) teaches END-TO-END SALIENCY MAPPING VIA PROBABILITY DISTRIBUTION PREDICTION: a system for predicting saliency in an image and a method of using the prediction system are described. Attention maps for each of a set of training images are used to train the system. The training includes passing the training images through a neural network and optimizing an objective function over the training set which is based on a distance measure computed between a first probability distribution computed for a saliency map output by the neural network and a second probability distribution computed for the attention map for the respective training image (abstract).
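The distribution-distance objective summarized from Jetley's abstract can be illustrated with a short sketch. The use of KL divergence as the distance measure is an assumption for demonstration; the abstract recites only "a distance measure" between the two probability distributions.

```python
import numpy as np

def to_distribution(m):
    """Normalise a non-negative saliency or attention map into a probability
    distribution over pixels."""
    m = m.ravel()
    return m / m.sum()

def kl_distance(p, q, eps=1e-12):
    """KL divergence D(p || q): one possible distance measure between the
    predicted-saliency distribution and the attention-map distribution."""
    p = p + eps
    q = q + eps
    return float((p * np.log(p / q)).sum())

pred = to_distribution(np.array([[1.0, 3.0], [2.0, 2.0]]))  # saliency map output
attn = to_distribution(np.array([[1.0, 3.0], [2.0, 2.0]]))  # attention map
d = kl_distance(pred, attn)     # identical distributions give zero divergence
```

Training would minimise this distance over the training set, pulling the network's predicted saliency distribution toward the attention-map distribution.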
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TSUNG-YIN TSAI whose telephone number is (571)270-1671. The examiner can normally be reached 7am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bhavesh Mehta can be reached at (571) 272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TSUNG YIN TSAI/Primary Examiner, Art Unit 2656