DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
1: Claims 1, 2, 9-11, 16, 17, 19, 20, 27-29, 34 and 35 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Ding et al. (US 2024/0135561 A1).
2: As for Claim 1, it is rejected for the reasons discussed with respect to Claim 19.
3: As for Claim 2, it is rejected for the reasons discussed with respect to Claim 20.
4: As for Claim 9, it is rejected for the reasons discussed with respect to Claim 27.
5: As for Claim 10, it is rejected for the reasons discussed with respect to Claim 28.
6: As for Claim 11, it is rejected for the reasons discussed with respect to Claim 29.
7: As for Claim 16, it is rejected for the reasons discussed with respect to Claim 34.
8: As for Claim 17, it is rejected for the reasons discussed with respect to Claim 35.
9: As for Claim 19, Ding et al. teaches in Paragraphs [0079, 0209, 0232, 0259 and 0354] a system for enhancing an image captured by a user device, the system comprising: communications circuitry configured to access the user device; and control circuitry configured to: detect an input to initiate an image capture process for capturing a first image via a camera of the user device (an image is captured by a camera); determine one or more attributes of the first image (objects in an image are classified); identify one or more reference images that are characterized by the one or more attributes of the first image (Figure 25A depicts images generated to represent the classified objects from the captured image); control the user device to display the one or more reference images (Figure 25A displays the reference images of objects) that are characterized by the one or more attributes of the first image; receive a selection of a first reference image (Figure 25D depicts a user selecting desired reference images from the displayed objects) from the displayed one or more reference images (2508); and enhance the first image based on the selected first reference image.
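For illustration only, the claimed capture-classify-match-enhance flow can be sketched as below; every identifier is a hypothetical stand-in for a claim limitation, and none of this code is taken from Ding et al. or the instant application:

    # Illustrative sketch only; all names are hypothetical stand-ins
    # for the claim limitations, not code from Ding et al.
    from dataclasses import dataclass

    @dataclass
    class ReferenceImage:
        image_id: str
        attributes: frozenset  # e.g. {"dog", "outdoor"}

    def classify_objects(image):
        # Stand-in for attribute determination (object classification).
        return frozenset(image.get("labels", []))

    def enhance(image, reference):
        # Stand-in for enhancement based on the selected reference image.
        return {**image, "enhanced_with": reference.image_id}

    def capture_and_enhance(image, reference_db, select):
        attrs = classify_objects(image)                # determine attributes
        candidates = [r for r in reference_db          # identify references
                      if attrs & r.attributes]         # shared attributes
        chosen = select(candidates)                    # user selection (Fig. 25D)
        return enhance(image, chosen)                  # enhance first image

    refs = [ReferenceImage("ref-1", frozenset({"dog"})),
            ReferenceImage("ref-2", frozenset({"car"}))]
    out = capture_and_enhance({"labels": ["dog"]}, refs, lambda c: c[0])
    print(out["enhanced_with"])  # -> ref-1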
10: As for Claim 20, Ding et al. teaches in Paragraphs [0079, 0209, 0232, 0259 and 0354], and depicts in Figure 25A, wherein identifying one or more reference images (images of objects depicted in Figure 25A) that are characterized by the one or more attributes of the first image (the objects captured in the image are classified) further comprises the control circuitry (neural network) configured to: identify a plurality of potential reference images (images associated with the classified objects from the captured image); calculate a combined score (combined confidence score) for each one of the identified plurality of potential reference images; and identify the one or more reference images, from the plurality of potential reference images, for displaying on the user device (2502) based on their combined score (Paragraph [0209] teaches a combined confidence score).
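For illustration only, one way a combined confidence score could rank candidate reference images is sketched below; the weighted blend is an assumption, not the specific scheme of Paragraph [0209]:

    # Hypothetical combined-score ranking; the weighting is assumed.
    def combined_score(detector_conf, classifier_conf, w=0.5):
        # Weighted blend of two per-candidate confidence values.
        return w * detector_conf + (1 - w) * classifier_conf

    def rank_candidates(candidates):
        # candidates: list of (image_id, detector_conf, classifier_conf)
        scored = [(combined_score(d, c), img) for img, d, c in candidates]
        scored.sort(reverse=True)                 # highest score first
        return [img for _, img in scored]

    print(rank_candidates([("a", 0.9, 0.4), ("b", 0.6, 0.8)]))  # ['b', 'a']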
11: As for Claim 27, Ding et al. teaches in Paragraphs [0107, 0526-0528] further comprising the control circuitry (neural network) configured to: determine that the first image depicts one or more individuals in a foreground (depicted in Figure 22B); and in response to the determination that the first image depicts one or more individuals in the foreground, determine a percentage of the first image occupied by the one or more individuals (the scaling ratio is determined based on the size and the percentage of the image that the foreground object occupies).
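For illustration only, determining the percentage of an image occupied by a foreground mask reduces to a pixel count, as in this assumed sketch:

    # Hypothetical sketch: percentage of pixels covered by a person mask.
    import numpy as np

    def foreground_percentage(mask):
        # mask: boolean array, True where an individual is detected.
        return 100.0 * mask.sum() / mask.size

    mask = np.zeros((100, 100), dtype=bool)
    mask[20:80, 30:70] = True            # assumed person region
    print(foreground_percentage(mask))   # 24.0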
12: As for Claim 28, Ding et al. teaches in Paragraphs [0205, 0151 and 0527] further comprising the control circuitry (neural network) configured to: determine that the percentage of the first image occupied by the one or more individuals exceeds a predetermined percentage threshold (viewed as the objects being classified as significant objects due to their threshold size being met); remove a portion of the first image that is occupied by the one or more individuals in response to determining that the percentage of the first image occupied by the one or more individuals exceeds the predetermined percentage threshold (the foreground object can be removed when it is determined to be a significant object based on its size); and in-paint a background of the image from which the portion of the first image that is occupied by the individuals is removed (Paragraph [0151]).
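For illustration only, the threshold-remove-inpaint sequence could look like the sketch below; cv2.inpaint is a real OpenCV call, but the threshold value and overall workflow are assumptions, not Ding's method:

    # Hypothetical threshold-then-inpaint sketch (not Ding's algorithm).
    import numpy as np
    import cv2  # OpenCV

    THRESHOLD_PCT = 20.0  # assumed predetermined percentage threshold

    def remove_and_inpaint(image, person_mask):
        pct = 100.0 * person_mask.sum() / person_mask.size
        if pct <= THRESHOLD_PCT:
            return image                       # below threshold: leave as-is
        mask8 = person_mask.astype(np.uint8) * 255
        # Remove the occupied portion and in-paint the background.
        return cv2.inpaint(image, mask8, 3, cv2.INPAINT_TELEA)

    img = np.full((100, 100, 3), 128, dtype=np.uint8)
    m = np.zeros((100, 100), dtype=bool)
    m[10:60, 10:60] = True                     # 25% of the image
    print(remove_and_inpaint(img, m).shape)    # (100, 100, 3)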
13: As for Claim 29, Ding et al. teaches in Paragraphs [0155, 0278, 0386, 0739 and 0740] further comprising the control circuitry (neural network) configured to apply a separate deep learning model to the background and the foreground of the first image (Ding et al. teaches separating the foreground from the background and performing different image processing, using the neural network, on the foreground objects and the background separately), wherein the foreground includes only the removed portion of the first image that is occupied by the one or more individuals (the foreground can be an individual, as depicted in Figure 22C).
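For illustration only, applying separate models to the foreground and background regions could be sketched as follows; the two "models" here are trivial stand-ins, not deep learning networks:

    # Hypothetical split-processing sketch: different models applied
    # to the foreground (individuals) and the background.
    import numpy as np

    def process_separately(image, fg_mask, fg_model, bg_model):
        out = image.copy()
        out[fg_mask] = fg_model(image[fg_mask])    # foreground-only model
        out[~fg_mask] = bg_model(image[~fg_mask])  # background-only model
        return out

    img = np.random.rand(64, 64, 3)
    mask = np.zeros((64, 64), dtype=bool)
    mask[16:48, 16:48] = True
    sharpen = lambda px: np.clip(px * 1.2, 0, 1)   # stand-in "model"
    smooth = lambda px: px * 0.8                   # stand-in "model"
    print(process_separately(img, mask, sharpen, smooth).shape)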
14: As for Claim 34, Ding et al. teaches in Paragraphs [0125 and 0215] further comprising the control circuitry configured to: apply a deep learning model to the first image to generate a vector representation of the first image (object feature vectors); and use the vector representation to obtain matching reference images (corresponding semantic entities). Ding et al. teaches using a deep learning neural network to generate feature vectors from the captured image and to match those feature vectors to corresponding semantic entities in the external knowledge base.
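For illustration only, matching by vector representation can be sketched as a nearest-neighbor search under cosine similarity; the feature extractor below is a crude stand-in for a deep learning model:

    # Hypothetical embed-and-match sketch (not Ding's network).
    import numpy as np

    def embed(image):
        # Stand-in feature extractor: per-channel mean as a vector.
        return image.mean(axis=(0, 1))

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def best_match(image, references):
        q = embed(image)                         # vector representation
        return max(references, key=lambda r: cosine(q, embed(r)))

    refs = [np.random.rand(8, 8, 3) for _ in range(3)]
    print(best_match(np.random.rand(8, 8, 3), refs).shape)  # (8, 8, 3)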
15: As for Claim 35, Ding et al. teaches in Paragraphs [0125, 0215 and 0216] further comprising the control circuitry (neural network) configured to: calculate a vector representation of the one or more attributes (object feature vectors) of the first image; and use the calculated vector representation (calculated feature vector) and one or more device parameters of the user device to identify the one or more reference images (corresponding semantic entities) that are characterized by the one or more attributes of the first image. The limitation is broad and does not define the device parameter. Furthermore, Ding et al. teaches performing the matching using the object feature vectors in connection with their corresponding subgraph feature map, which the examiner views as a device parameter.
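For illustration only, a variant of the Claim 34 sketch in which a device parameter is appended to the attribute vector before matching; the device parameter here (a normalized camera value) is purely an assumed example:

    # Hypothetical sketch: attribute vector augmented with a device
    # parameter before nearest-neighbor matching.
    import numpy as np

    def query_vector(attr_vec, device_param):
        return np.concatenate([attr_vec, [device_param]])

    refs = {"ref-1": np.array([0.9, 0.1, 0.5]),
            "ref-2": np.array([0.2, 0.8, 0.5])}
    q = query_vector(np.array([0.85, 0.15]), 0.5)  # device param appended
    best = min(refs, key=lambda k: np.linalg.norm(refs[k] - q))
    print(best)  # -> ref-1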
Allowable Subject Matter
Claims 3, 4, 8, 14, 21, 22, 26 and 32 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES M HANNETT whose telephone number is (571)272-7309. The examiner can normally be reached 8:00 AM-5:00 PM Monday through Thursday.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Twyler Haskins, can be reached at 571-272-7406. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/JAMES M HANNETT/Primary Examiner, Art Unit 2639
JMH
October 28, 2025