DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/5/2026 has been entered.
Response to Arguments
Applicant’s arguments filed 2/5/2026 with respect to claims 1-20 have been fully considered but are moot in view of the new ground(s) of rejection.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 9-16, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Donnelly (PGPUB Document No. US 2022/0253923) in view of Kite-Powell (“Take A Look At How Chewy Is Using Augmented Reality,” URL: https://www.forbes.com/sites/jenniferhicks/2021/10/01/take-a-look-at-how-chewy-is-using-augmented-realty/), further in view of Kuper et al. (PGPUB Document No. US 2018/0350010), and further in view of Adeyoola et al. (PGPUB Document No. US 2014/0176565).
Regarding claim 11, Donnelly teaches a system comprising:
one or more memories collectively storing computer-executable instructions;
and one or more processors configured to collectively execute the computer-executable instructions and cause the system to:
receive visual data of a person (information of the body that is scanned by the scanner (Donnelly: 0036));
extract features of the person by analyzing the visual data (the obtained cloud of data points (Donnelly: 0036));
generate a first mapping for the person based on the extracted features using one or more computer vision algorithms (the resulting body structure after the scan (Donnelly: 0038));
collect measurement data of a person-related item selected by a user (the Examiner submits that the known size of garments (Donnelly: 0036, 0040, 0032) implies the garments are measured at some point);
generate a second mapping for the person-related item based on the measurement data (size of garment (Donnelly: 0036, 0040, 0032));
create a composite mapping by combining the first, second, and third mappings (the visualized result of the virtual fitting (Donnelly: 0119, step 722 of FIG. 4));
generate a composite mapping by overlaying the second mapping on top of the first mapping (the resulting virtual “try-on” of a particular garment (Donnelly: 0067));
generate a visual representation of the composite mapping (displaying the result of the virtual try-on based on the teachings of Donnelly above);
and cause the visual representation to display in an augmented reality environment (displaying the results of the virtual fitting (Donnelly: 0119, step 722 of FIG. 4)).
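For illustration only, the limitations mapped above might be sketched as the following minimal Python pipeline; the Mapping structure and the function names (extract_features, overlay) are editorial assumptions and are not drawn from Donnelly's disclosure.

```python
# Illustrative sketch only; all names are hypothetical and are not taken
# from Donnelly's implementation.
from dataclasses import dataclass

@dataclass
class Mapping:
    """Named dimensions (e.g., chest girth, back length) in centimeters."""
    dims: dict

def extract_features(point_cloud):
    """Stand-in for body-scan processing (cf. Donnelly: 0036, 0038):
    reduce a scanned point cloud to a first (body) mapping."""
    zs = [z for (_, _, z) in point_cloud]
    return Mapping(dims={"height": max(zs) - min(zs)})

def overlay(first: Mapping, second: Mapping) -> Mapping:
    """Create the composite mapping by overlaying the second (item)
    mapping on top of the first (body) mapping."""
    composite = dict(first.dims)
    composite.update({f"item_{k}": v for k, v in second.dims.items()})
    return Mapping(dims=composite)

# composite = overlay(extract_features(scan), Mapping(dims={"girth": 45.0}))
# render_in_ar(composite)  # hypothetical AR display call
```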
However, Donnelly does not expressly teach applying the teachings above to pets; Kite-Powell suggests this application (virtually visualizing Halloween costumes on pets (Kite-Powell: para. 5)).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to apply the teachings above to pets, as suggested by Kite-Powell, because this enables an added variety of subjects to experience the virtual fitting room of Donnelly.
Further, the combined teachings above do not expressly teach, but Kuper teaches, predicting, using a machine learning (ML) model, anticipated growth of the pet over a defined period of time based on the extracted features (adaptive livestock growth models 150 to determine and predict a livestock growth rate 172 over time, utilizing artificial intelligence (Kuper: 0015)).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combined teachings above to utilize the growth-prediction teaching of Kuper, because this provides additional information to the user when purchasing the garment.
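Purely as an illustrative sketch, a growth-prediction model in the spirit of Kuper's adaptive growth models could look as follows; the linear model, the feature names, and the training values are invented for exposition and are not data from Kuper.

```python
# Illustrative sketch only: a plain linear regression stands in for Kuper's
# adaptive growth models (Kuper: 0015); the breed data below is invented.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: age in weeks -> chest girth in cm.
ages_weeks = np.array([[8], [16], [26], [52], [78]])
girth_cm = np.array([30.0, 38.0, 45.0, 52.0, 55.0])

model = LinearRegression().fit(ages_weeks, girth_cm)

def predict_growth(current_age_weeks: float, horizon_weeks: float) -> float:
    """Anticipated girth after the defined period of time has elapsed."""
    return float(model.predict([[current_age_weeks + horizon_weeks]])[0])

# e.g., predict_growth(12, 26) estimates girth half a year out, which could
# inform the user's purchase decision for the garment.
```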
The combined teachings above now teach:
a first mapping associated with determining the size of the pet; and
a second mapping associated with determining the size of a garment that fits said pet.
However, the combined teachings above do not expressly teach:
determining that a dimensional difference between the extracted features of the pet in the first mapping and the measurement data of the pet-related item in the second mapping is present in the visual representation of the composite mapping.
Adeyoola teaches the need for garment adjustment to simulate the growth of a child (Adeyoola: 0513). Applying the teachings of Adeyoola to the combined teachings above results in teaching:
determining that a dimensional difference between the extracted features of the pet in the first mapping and the measurement data of the pet-related item in the second mapping is present in the visual representation of the composite mapping (the need for manual adjustment according to Adeyoola corresponds to “determining the dimensional difference,” wherein said adjustment can be made automatically using the body-scanning teaching of Donnelly (Donnelly: 0036, 0038) and the fit analysis of garments (Donnelly: 0067));
responsive to determining the dimensional difference is present in the visual representation:
adjusting the extracted features of the pet and the measurement data of the pet-related item in the second mapping (repeating the body scanning and fit analysis of Donnelly (Donnelly: 0036, 0038, 0067)),
integrating the adjusted extracted features and adjusted measurement data into the composite mapping (the resulting garment that is adjusted to fit the simulated growth of the pet),
and generating a second visual representation of the composite mapping comprising the integrated adjusted extracted features and measurement data (the resulting adjusted garment);
and causing the second visual representation of the composite mapping to display in the augmented reality environment (the displayed virtual try-on (Donnelly: 0119, step 722 of FIG. 4)).
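As a minimal sketch of the dimensional-difference determination and the responsive adjustment loop just discussed, the following is offered for illustration; the tolerance value and the rescan/resize callbacks are editorial assumptions, not taken from Donnelly or Adeyoola.

```python
# Illustrative sketch only; tolerance and callbacks are assumptions.
def dimensional_difference(body, item, tolerance_cm=1.0):
    """Per-dimension gaps between the pet's extracted features and the
    pet-related item's measurement data that exceed the tolerance."""
    shared = body.keys() & item.keys()
    return {k: item[k] - body[k] for k in shared
            if abs(item[k] - body[k]) > tolerance_cm}

def refit(body, item, rescan, resize, max_rounds=5):
    """Responsive to a dimensional difference: re-scan, re-size, and
    integrate the adjusted values into a new composite mapping."""
    for _ in range(max_rounds):
        diff = dimensional_difference(body, item)
        if not diff:
            break
        body = rescan(body)        # repeat the body scan
        item = resize(item, diff)  # adjust the item's measurements
    return {**body, **{f"item_{k}": v for k, v in item.items()}}
```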
Regarding claim 12, the combined teachings teach the system of claim 11, wherein the visual data comprises at least one of a video or an image of the pet (infrared image (Donnelly: 0019)).
Regarding claim 13, the combined teachings teach the system of claim 11, wherein the composite mapping comprises at least one of (i) a two-dimensional representation (2D (Donnelly: 0089)) or (ii) a three-dimensional representation that depicts interactions of the pet with the pet-related item.
Regarding claim 14, the combined teachings teach the system of claim 11, wherein, to create the composite mapping, the one or more processors are configured to collectively execute the computer-executable instructions and cause the system to:
identify the one or more dimensional differences between the pet in the first mapping and the pet-related item in the second mapping based on the third mapping (the fit analysis determining the right size/fit requires knowing the size of the garment relative to the body/pet);
adjust sizes of at least one of (i) the pet in the first mapping (adjusting the body (Donnelly: 0051)), or (ii) the pet-related item in the second mapping;
and integrate the adjusted sizes into the composite mapping to provide a scaled representation (the visualized result of the virtual fitting (Donnelly: 0119, step 722 of FIG. 4)).
Regarding claim 15, the combined teachings teach the system of claim 11, wherein the one or more processors are configured to collectively execute the computer-executable instructions and cause the system to further generate a request to facilitate a purchase decision for the pet-related item (the virtual fitting process of previewing how a garment fits facilitates the purchase decision for the garment (Donnelly: 0119); the fitting process also works with shopping apps (Donnelly: 0032)).
Regarding claim 16, the combined teachings teach the system of claim 11, wherein the features of the pet comprise at least one of (i) physical attributes of the pet (body scan of the model (Donnelly: 0019)); (ii) behavioral patterns of the pet; or (iii) anatomical movements of the pet.
Regarding claim 19, the combined teachings above teach the system of claim 11, wherein the anticipated growth comprises one or more predicted changes in body dimensions, shape, or behavioral tendencies of the pet (predicting livestock growth rate (Kuper: 0015)).
Claims 1-6 and 9 are the corresponding method claims of claims 11-16 and 19. The limitations of claims 1-6 and 9 are substantially similar to the limitations of claims 11-16 and 19. Therefore, claims 1-6 and 9 have been analyzed and rejected in a manner substantially similar to claims 11-16 and 19.
Regarding claim 10, the combined teachings teach the method of claim 1, further comprising:
receiving textual data of the pet, and extracting the features of the pet by analyzing the textual data, wherein the textual data comprises at least one of (i) descriptive statements about the pet (user entering measurements (Donnelly: 0100)); or (ii) one or more metrics related to the pet.
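For illustration only, extracting metrics from such textual data might be sketched as follows; the regular expression and the recognized dimension names are assumptions made for exposition.

```python
# Illustrative sketch only; pattern and dimension names are assumptions.
import re

def metrics_from_text(text: str) -> dict:
    """Extract simple numeric metrics (e.g., "neck 30 cm") from
    descriptive statements about the pet."""
    pattern = re.compile(r"(neck|chest|length|weight)\s*[:=]?\s*([\d.]+)", re.I)
    return {m.group(1).lower(): float(m.group(2))
            for m in pattern.finditer(text)}

# metrics_from_text("Neck: 30 cm, chest 45 cm")
# -> {"neck": 30.0, "chest": 45.0}
```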
Claim 20 is the corresponding computer program product (virtual fitting room software, Donnelly: 0018) claim of claim 11. The limitations of claim 20 are substantially similar to the limitations of claim 11. Therefore, claim 20 has been analyzed and rejected in a manner substantially similar to claim 11.
Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Donnelly in view of Kite-Powell, Kuper, and Adeyoola as applied to the claims above, and further in view of Lee et al. (PGPUB Document No. US 2021/0049817).
Regarding claim 17, the combined teachings do not expressly teach, but Lee teaches, the system of claim 11, wherein the one or more processors are configured to collectively execute the computer-executable instructions and cause the system to further: receive feedback from the user when the composite mapping is being displayed; and dynamically adjust the composite mapping based on the feedback (Lee teaches the concept of allowing the user to make adjustments to the virtual clothes (Lee: 0514); the virtual clothes reflecting the change correspond to dynamically adjusting as presently claimed).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combined teachings above so as to enable the user to make adjustments, as suggested by Lee, because this affords the user further customization when experiencing the virtual fitting room.
Claim 7 is similar in scope to claim 17.
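As an illustrative sketch of the feedback-driven adjustment concept attributed to Lee, the following is offered; the event schema ("action", "dimension", "delta_cm") and the handler name are hypothetical and are not drawn from Lee.

```python
# Illustrative sketch only; event schema is an editorial assumption.
def on_user_feedback(composite: dict, event: dict) -> dict:
    """Dynamically adjust the displayed composite mapping in response to
    user feedback, in the spirit of Lee's user-adjustable virtual clothes
    (Lee: 0514)."""
    if event.get("action") in ("loosen", "tighten"):
        sign = 1.0 if event["action"] == "loosen" else -1.0
        dim = event["dimension"]
        composite[dim] = composite.get(dim, 0.0) + sign * event.get("delta_cm", 1.0)
    return composite  # the AR view would be re-rendered after each change
```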
Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Donnelly in view of Kite-Powell, Kuper, and Adeyoola as applied to the claims above, and further in view of Dhana et al. (US Patent No. 11,961,198).
Regarding claim 18, the combined teachings do not expressly teach, but Dhana teaches, the system of claim 11, wherein a convolutional neural network is trained to process the visual data to extract the features (Dhana: claim 28).
The combined teachings above contain a “base” process of utilizing 3D body scans; the claimed invention can be seen as an “improvement” in that the scanning process is carried out by the use of a convolutional neural network.
Dhana teaches a known technique of utilizing a convolutional neural network that is applicable to the “base” process.
Dhana’s known technique would have been recognized by one of ordinary skill in the art as applicable to the “base” process of the combined teachings above, and the results would have been predictable, namely effective scanning, which results in an improved process.
Therefore, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
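For illustration only, a generic convolutional feature extractor of the kind contemplated by Dhana's claim 28 might look as follows; the specific architecture (layer sizes, pooling choices) is an editorial assumption, not Dhana's claimed network.

```python
# Illustrative sketch only; architecture details are assumptions.
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Map visual data (RGB images) to a fixed-size feature vector."""
    def __init__(self, num_features: int = 16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, num_features)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (batch, 3, H, W) visual data -> (batch, num_features)
        x = self.conv(image).flatten(1)
        return self.head(x)
```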
Claim 8 is similar in scope to claim 18.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to David H. Chu, whose telephone number is (571) 272-8079. The examiner can normally be reached M-F, 9:30 am-1:30 pm and 3:30-8:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel F. Hajnik, can be reached at (571) 272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID H CHU/Primary Examiner, Art Unit 2616