DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 07/05/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1, 11, and 18 is/are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Bell et al. (US 20140225888 A1), hereinafter referred to as Bell.

Claim 1. A computer-implemented method for generating a scaled three-dimensional reconstruction of an object (Bell, Abstract), comprising:
receiving a digital input including a calibration target (Bell, [0012]: the three-dimensional model is the calibration target, with garments adjusted through a virtual try-on on the three-dimensional model) and the object (Bell, [0050] and [0051]);
defining a three-dimensional coordinate system representing a three-dimensional space for scaling the object using the calibration target (Bell, [0130] and [0131], cylindrical coordinates);
positioning the calibration target within the three-dimensional coordinate system based on the digital input (Bell, [0130] and [0131]; Figs. 3, 4, and 14);
aligning the object to the calibration target within the three-dimensional coordinate system based on the digital input (Bell, [0129] and Fig. 16, garment is positioned around the 3D model);
determining a scaling factor between the calibration target and the object based on measurements of the calibration target in the three-dimensional coordinate system (Bell, [0047] and [0050]); and
generating the scaled three-dimensional reconstruction of the object based on the determined scaling factor (Bell, [0057] and [0135]).

Regarding claims 11 and 18, they essentially recite the same limitations as claim 1. Therefore, the rejection of claim 1 is applied to claims 11 (Bell, system) and 18 (Bell, [0145] storage medium).
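For illustration only, the scaling step recited in claim 1 can be sketched as follows. The function names and example measurements below are hypothetical and are not drawn from Bell or from the application; the sketch merely shows the general ratio-based scaling technique the claim recites:

```python
def scaling_factor(known_target_size: float, measured_target_size: float) -> float:
    """Ratio mapping model units to real-world units, derived from a
    calibration target of known physical size (hypothetical helper)."""
    if measured_target_size <= 0:
        raise ValueError("measured target size must be positive")
    return known_target_size / measured_target_size

def scale_points(points, factor):
    """Apply the scaling factor to reconstructed 3D points to produce
    the scaled reconstruction (hypothetical helper)."""
    return [(x * factor, y * factor, z * factor) for x, y, z in points]

# Hypothetical example: a 10 cm calibration cube measures 2.5 units
# in the unscaled reconstruction, so each model unit is 4 cm.
factor = scaling_factor(10.0, 2.5)
scaled = scale_points([(1.0, 2.0, 0.5)], factor)
```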
Allowable Subject Matter
Claims 2-10, 12-17, 19 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 2, no prior art discloses, alone or in combination, the features "The computer-implemented method of claim 1, wherein receiving the digital input comprises: capturing, via one or more sensors, a plurality of images of the calibration target of known size and the object of unknown size; and uploading the plurality of images to a processing unit, wherein the processing unit identifies and isolates the calibration target and the object within the plurality of images."

Claim 3 depends on allowable claim 2 and is therefore allowable for the same reasons as claim 2.

Regarding claim 4, no prior art discloses, alone or in combination, the features "The computer-implemented method of claim 1, wherein positioning the calibration target within the three-dimensional coordinate system comprises: identifying one or more points on the calibration target; aligning the one or more points with corresponding reference points in the three-dimensional coordinate system; and verifying the alignment through iterative refinement to minimize spatial discrepancies between the one or more points and their corresponding reference points in the three-dimensional space."

Claims 5 and 6 depend on allowable claim 4 and are therefore allowable for the same reasons as claim 4.

Regarding claim 7, no prior art discloses, alone or in combination, the features "The computer-implemented method of claim 1, further comprising: generating a bounding box around an expected location of the calibration target, wherein dimensions of the bounding box is based on known dimensions of the calibration target."

Claim 8 depends on allowable claim 7 and is therefore allowable for the same reasons as claim 7.

Regarding claim 9, no prior art discloses, alone or in combination, the features "The computer-implemented method of claim 1, further comprising: applying a machine learning algorithms for automated detection of the calibration target and the object in the digital input."

Regarding claim 10, no prior art discloses, alone or in combination, the features "The computer-implemented method of claim 1, wherein one or more geometrical parameters of the calibration target is generated based on a geometric constraint, comprising: detecting vertical lines corresponding to vertical edges of the calibration target; detecting horizontal lines corresponding to top and bottom edges of the calibration target; and aligning the detected vertical and horizontal lines to their expected positioning."

Regarding claim 12, no prior art discloses, alone or in combination, the features "The system of claim 11, wherein receiving the digital input comprises: capturing, via one or more sensors, a plurality of images of the calibration target of known size and the object of unknown size; and uploading the plurality of images to a processing unit, wherein the processing unit identifies and isolates the calibration target and the object within the plurality of images."

Claim 13 depends on allowable claim 12 and is therefore allowable for the same reasons as claim 12.

Regarding claim 14, no prior art discloses, alone or in combination, the features "The system of claim 11, wherein positioning the calibration target within the three-dimensional coordinate system comprises: identifying one or more points on the calibration target; aligning the one or more points with corresponding reference points in the three-dimensional coordinate system; and verifying the alignment through iterative refinement to minimize spatial discrepancies between the one or more points and their corresponding reference points in the three-dimensional space."

Claims 15 and 16 depend on allowable claim 14 and are therefore allowable for the same reasons as claim 14.

Regarding claim 17, no prior art discloses, alone or in combination, the features "The system of claim 11, further comprising: generating a bounding box around an expected location of the calibration target, wherein dimensions of the bounding box is based on known dimensions of the calibration target."

Regarding claim 19, no prior art discloses, alone or in combination, the features "The non-transitory computer readable medium of claim 18, wherein receiving the digital input comprises: capturing, via one or more sensors, a plurality of images of the calibration target of known size and the object of unknown size; and uploading the plurality of images to a processing unit, wherein the processing unit identifies and isolates the calibration target and the object within the plurality of images."

Claim 20 depends on allowable claim 19 and is therefore allowable for the same reasons as claim 19.
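For illustration only, the bounding-box limitation indicated allowable in claims 7 and 17 can be sketched minimally, assuming an axis-aligned box sized from the target's known dimensions plus a padding margin; all names and values here are hypothetical and are not drawn from the application:

```python
def bounding_box(expected_center, target_dims, margin=1.2):
    """Axis-aligned box around the expected calibration-target location.

    expected_center: (x, y, z) expected location of the calibration target
    target_dims:     (w, h, d) known physical dimensions of the target
    margin:          padding factor to tolerate localization error
    Returns (min_corner, max_corner) as two (x, y, z) tuples.
    """
    cx, cy, cz = expected_center
    hw, hh, hd = (d * margin / 2 for d in target_dims)
    return (cx - hw, cy - hh, cz - hd), (cx + hw, cy + hh, cz + hd)

# Hypothetical example: a 10 cm cube expected 1 m in front of the sensor.
lo, hi = bounding_box((0.0, 0.0, 1.0), (0.1, 0.1, 0.1))
```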
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure, as follows:

US 20170140574 A1: According to one embodiment, an image processing device includes at least one processor. The at least one processor is configured to acquire a first three-dimensional model regarding a subject, set a plurality of first control points on the first three-dimensional model, acquire mesh data of a meshed image of a region of clothing extracted from a captured image, acquire a second three-dimensional model, modify the mesh data based on an amount of movement from each of the plurality of first control points to each respective one of a plurality of second control points, and generate an image of the clothing using the captured image and the modified mesh data.

US 20140225888 A1: A method of creating a three-dimensional model of a unique body is provided, said method comprising obtaining a three-dimensional model of a standard body and obtaining a two-dimensional image of the unique body that is to be modelled. The method further comprises determining a location of the unique body in said image, determining a position of the unique body in said image, and using the determined location and position data to extract a two-dimensional outline of the unique body from the two-dimensional image. The method further comprises selecting a measurement for which a value is to be calculated for the unique body, calculating a value of said selected measurement from the extracted two-dimensional outline, using said calculated value of the selected measurement to update a corresponding measurement on the three-dimensional model of a standard body, and outputting the updated three-dimensional model of a standard body as a three-dimensional model of the unique body.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARTIN MUSHAMBO whose telephone number is (571)270-3390. The examiner can normally be reached Monday-Friday (8:00AM-5:00PM).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alicia Harrington can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARTIN MUSHAMBO/Primary Examiner, Art Unit 2615 03/07/2026