DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This Office Action is responsive to the amendment and remarks filed on 01/27/2026. Claims 1-20 remain pending in the application. In the amendment, Applicant has amended claims 13 and 19. Applicant submits that no new matter has been added by way of this amendment.
Response to Arguments
Applicant’s arguments, see pages 7 and 8, filed 01/27/2026, with respect to the rejection of claim 10 under 35 U.S.C. 102 have been fully considered and are persuasive. The rejection of claim 10 has been withdrawn. However, upon further consideration, a new ground of rejection is made under 35 U.S.C. 103 over Babenko (US 20170293800 A1) in view of Asikainen (US 20210008413 A1), as detailed below.
Applicant’s arguments, see pages 8-10, filed 01/27/2026, with respect to the prior art rejections have been fully considered and are persuasive. The rejection of claim 13 under 35 U.S.C. § 102 has been withdrawn. Upon further consideration, a new ground of rejection is made in view of Babenko (US 20170293800 A1), as detailed below.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 13, 14, and 20 are rejected under 35 U.S.C. 102(a)(1) and/or 102(a)(2) as being anticipated by Babenko (US 20170293800 A1).
Regarding Claim 13: (Currently Amended) Babenko discloses a non-transitory tangible computer-readable medium storing executable code (Refer to paras [020] and [095]; “FIG. 11 is a block diagram illustrating components of an example machine able to read instructions from a machine-readable medium and execute them in a processor or controller.”) comprising:
code to cause a processor (Refer to para [095]; “FIG. 11 is a block diagram illustrating components of an example machine able to read instructions described as processes herein from a machine-readable medium and execute them in at least one processor (or controller).”) to detect an object region in an input image (Refer to para [036]; “Other embodiments of the feature extraction module 104 shown in FIG. 2 may use one or a combination of the following: (a) edge/corner detection methods, such as Harris Corner or Canny edge, which find edges or corners in the image to use as candidate features; (b) image gradients, which extract edge strength information; (c) oriented filters, which identify specific shapes; (d) thresholding methods, which use local or global threshold values to extract features; (e) image patch descriptors…”) having a first resolution (Refer to para [024]; “If the likelihood is below a threshold, such that the area of interest does not contain the object of interest (false positive) the machine learning model filters out features corresponding to the area of interest in images having the first resolution.”); and code to cause the processor (Refer to para [095]; “FIG. 11 is a block diagram illustrating components of an example machine able to read instructions described as processes herein from a machine-readable medium and execute them in at least one processor (or controller).”) to use a first machine learning model to increase a resolution of the object region (Refer to para [024]; “The system transmits the image to a machine learning model to identify an area of interest containing an object of interest in the image, such as a cylindrical container or tank with a floating roof structure.”) to produce an enhanced object region having a second resolution higher than the first resolution (Refer to para [024]; “The system receives a second image of the geographical area. The second image has a resolution higher than the first image, e.g., 50 cm per pixel. The system may transmit the second image to the machine learning model to determine a likelihood that the area of interest contains the object of interest. If the likelihood is below a threshold, such that the area of interest does not contain the object of interest (false positive) the machine learning model filters out features corresponding to the area of interest in images having the first resolution. If the likelihood exceeds the threshold, a visual representation identifying the object of interest is sent to a user device.”), wherein the first machine learning model is trained based on object identity features (Refer to para [028]; “The remote container analysis system 101 transmits the feature vector to the machine learning model 104 to identify objects of interest in the images as illustrated and described below with reference to FIG. 5. The container analysis module 107 may analyze a pattern related to the identified objects of interest in the images, for example, times of capture of images, counts of the images, filled volumes of the images.”).
Regarding Claim 14: (Original) Babenko discloses the object region includes detected text (Refer to para [062]; “The process may use the image analysis module 204, the feature extraction module 104, and the machine learning model 106. FIG. 4 and the other figures use like reference numerals to identify like elements. A letter after a reference numeral, such as “410a,” indicates that the text refers specifically to the element having that particular reference numeral. A reference numeral in the text without a following letter, such as “410,” refers to any or all of the elements in the figures bearing that reference numeral, e.g., “410” in the text refer to reference numerals “410a” and/or “410b” in the figures.”).
Regarding Claim 20: (New) Babenko discloses the first machine learning model includes a neural network (Refer to para [038]; “The feature extraction module 104 may use the geometric information to directly create the probabilistic heat map or in conjunction with other image processing operations, for example, as a line finding algorithm or random forest algorithm, or using machine learning methods such as Support Vector Machines (SVM), neural network, or convolutional neural network (CNN), which requires no feature extraction.”).
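For illustration of the arrangement recited in claim 13 only, and not as a characterization of Babenko or of Applicant's disclosure: the following is a minimal sketch, assuming PyTorch, of detecting an object region in a low-resolution input and passing the cropped region through a learned model that increases its resolution. The model, the detector stub, and all names (TinySRNet, detect_object_region) are hypothetical.

```python
# Hypothetical sketch of the claim 13 pipeline; not from the record.
import torch
import torch.nn as nn

class TinySRNet(nn.Module):
    """Stand-in 'first machine learning model': learned 2x super-resolution."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * 4, 3, padding=1),  # 4 = (2x upscale)^2
            nn.PixelShuffle(2),                  # rearranges channels into 2x spatial size
        )

    def forward(self, x):
        return self.body(x)

def detect_object_region(image):
    # Stand-in for any detector (edge/corner methods, a CNN, etc.);
    # here it simply returns a fixed box (y0, y1, x0, x1).
    return 16, 48, 16, 48

image = torch.rand(1, 3, 64, 64)              # input image at a first resolution
y0, y1, x0, x1 = detect_object_region(image)
region = image[:, :, y0:y1, x0:x1]            # detected object region
enhanced = TinySRNet()(region)                # second resolution, higher than the first
print(region.shape, "->", enhanced.shape)     # (1, 3, 32, 32) -> (1, 3, 64, 64)
```

The PixelShuffle layer is one common way a learned network increases spatial resolution; any comparable upsampling model would illustrate the same claim shape.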
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Babenko (US 20170293800 A1) in view of Asikainen (US 20210008413 A1).
Regarding Claim 10: (Original) Babenko discloses an apparatus (Refer to para [023]; “Disclosed by way of example embodiments are systems, methods and/or computer program products (e.g., a non-transitory computer readable storage media that stores instructions executable by one or more processing units) for identifying remote objects, such as cylindrical containers or tanks with floating roof structures over large geographic regions (e.g., a country), and determining the filled volume of remote objects.”) comprising: a memory (Refer to para [057]; “FIG. 2 may store patterns received from the container analysis module 107. The container pattern store 206 may be organized as a database or table stored on one or more of removable or non-removable memory cards, tape cassettes, zip cassettes, and computer hard drives.”) and a processor coupled to the memory (Refer to para [095]; “FIG. 11 is a block diagram illustrating components of an example machine able to read instructions described as processes herein from a machine-readable medium and execute them in at least one processor (or controller).”), wherein the processor is to: execute a first machine learning model (Refer to para [041]; “The remote container analysis system 101 may train the machine learning model 106 using training sets and data from the feature store 202. In one example embodiment, the machine learning model 106 may receive training sets including labeled clusters of pixels corresponding to objects of interest, as illustrated and described below with reference to FIGS. 3A and 3B.”) to produce an enhanced face image based on a face image (Refer to para [034]; “The optional feature extraction module 104 may extract feature vectors from the images in the image store 102. The feature vector may include aggregate values based on pixel attributes of pixels in the images.”), wherein the first machine learning model is trained (Refer to para [031]; “a machine learning training engine 203,”) based on a landmark loss and an identity feature loss (Refer to para [033]; “In a second phase, higher resolution imagery, e.g., 50 cm/pixel may be used to train the machine learning model to filter out false alarms, as described below with reference to FIG. 4. The parameter extraction module 105, the optional feature extraction module 104, and the image analysis module 204 may retrieve images stored by the image store 102 for processing. The image store 102 may be organized as a database or table of images stored on one or more of removable or non-removable memory cards, tape cassettes, zip cassettes, and computer hard drives. In one embodiment, the image store 102 may include multiple data fields, each describing one or more attributes of the images. In one example, the image store 102 contains, for a single image, the time of capture, spectral band information, geographical area coordinates, etc.” Also refer to para [041]; “The remote container analysis system 101 may train the machine learning model 106 using training sets and data from the feature store 202. In one example embodiment, the machine learning model 106 may receive training sets including labeled clusters of pixels corresponding to objects of interest, as illustrated and described below with reference to FIGS. 3A and 3B.”), and wherein the enhanced face image has a second resolution that is greater than a first resolution of the face image (Refer to para [072]; “The remote container analysis system 101 receives 512 a second image of the geographical area. The second image has a second resolution higher than the first resolution. The processing of the low resolution first image is followed by a cleanup phase on the second image.”).
Babenko does not expressly disclose a face image.
Asikainen teaches a system for tracking physical activity of a user performing exercise movements.
More specifically, Asikainen teaches training datasets that include facial images, which may be used to provide enhanced images based on a facial image. Refer to para [070]; “Example training datasets 224 curated by the data processing engine 204 include, but not limited to, a dataset containing a sequence of images or video for a number of users engaged in physical activity synchronized with labeled time-series heart rate over a period of time, a dataset containing a sequence of images or video for a number of users engaged in physical activity synchronized with labeled time-series breathing rate over a period of time, a dataset containing a sequence of images or video for a number of repetitions relating to an labelled exercise movement (e.g., barbell squat) performed by a trainer, a dataset containing images for a number of labelled facial expressions (e.g., strained facial expression), a dataset containing images of a number of labelled equipment (e.g., dumbbell), a dataset containing images of a number of labelled poses (e.g., a downward phase of a squat barbell movement), etc.”
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Babenko by substituting the stored images of Babenko with a sequence of images including facial images, as taught by Asikainen.
The suggestion/motivation for substituting the “images stored by the image store 102 for processing” of Babenko with the “dataset containing a sequence of images” as taught by Asikainen would have been to “collect[] each individual user performance for a number of users participating in a live class and provide[] a live view of the aggregate performance to a trainer situated remotely” (Asikainen, para [114]).
Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Babenko and Asikainen in order to obtain the specified claimed elements of Claim 10. It is for at least the aforementioned reasons that the Examiner has reached a conclusion of obviousness with respect to the claim in question.
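For illustration of the claim 10 training limitation only (a landmark loss and an identity feature loss), and not as a characterization of either reference or of Applicant's method: the following is a minimal sketch, assuming PyTorch, in which both losses are computed between an enhanced output and a higher-resolution target. The feature heads (landmark_head, identity_net) and the weighting lam are hypothetical.

```python
# Hypothetical sketch of a combined landmark + identity-feature loss; not from the record.
import torch
import torch.nn as nn

# Stand-in heads: a landmark regressor (5 (x, y) points) and an identity embedder.
landmark_head = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 10))
identity_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))

def combined_loss(enhanced, target, lam=0.5):
    # Landmark loss: predicted landmark positions should agree.
    lm_loss = nn.functional.mse_loss(landmark_head(enhanced), landmark_head(target))
    # Identity feature loss: identity embeddings should agree.
    id_loss = nn.functional.mse_loss(identity_net(enhanced), identity_net(target))
    return lm_loss + lam * id_loss

enhanced = torch.rand(1, 3, 64, 64, requires_grad=True)  # stand-in model output
target = torch.rand(1, 3, 64, 64)                        # higher-resolution ground truth
loss = combined_loss(enhanced, target)
loss.backward()  # gradients flow back to the enhancing model during training
print(float(loss))
```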
Allowable Subject Matter
Claims 1-9 are allowed.
Claims 11, 12, and 15-19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The prior art either singly or in combination does not teach, disclose or suggest at least the following claim limitation(s): “… segmenting an image into an object region and a background region, wherein the image has a first resolution; generating, using a first machine learning model, an enhanced object region with a second resolution that is greater than the first resolution, wherein the first machine learning model has been trained based on object landmarks; generating, using a second machine learning model, an enhanced background region with a third resolution that is greater than the first resolution; and combining the enhanced object region and the enhanced background region to produce an enhanced image.”
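For illustration of the allowable combination only, and not as a characterization of the prior art or of Applicant's implementation: the following is a minimal sketch, assuming PyTorch, of segmenting an image into object and background regions, enhancing each with a separate model, and recombining the results. The mask, both upscalers, and all names are hypothetical stand-ins for trained models.

```python
# Hypothetical sketch of the allowed segment/enhance/combine shape; not from the record.
import torch
import torch.nn.functional as F

def upscale_object(region):      # stand-in "first machine learning model"
    return F.interpolate(region, scale_factor=2, mode="bicubic", align_corners=False)

def upscale_background(region):  # stand-in "second machine learning model"
    return F.interpolate(region, scale_factor=2, mode="bilinear", align_corners=False)

image = torch.rand(1, 3, 64, 64)                 # first resolution
mask = torch.zeros(1, 1, 64, 64)
mask[:, :, 16:48, 16:48] = 1.0                   # segmented object region

# Second and third resolutions (both greater than the first; here both 2x).
enhanced_object = upscale_object(image * mask)
enhanced_background = upscale_background(image * (1 - mask))

# Combine the enhanced regions into a single enhanced image.
mask_up = F.interpolate(mask, scale_factor=2, mode="nearest")
enhanced = enhanced_object * mask_up + enhanced_background * (1 - mask_up)
print(enhanced.shape)  # torch.Size([1, 3, 128, 128])
```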
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MIA M THOMAS whose telephone number is (571)270-1583. The examiner can normally be reached M-Th 8:30am-4:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen (Steve) Koziol can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MIA M THOMAS/
Primary Examiner, Art Unit 2665