Prosecution Insights
Last updated: April 19, 2026
Application No. 18/747,915

Colored Three-Dimensional Digital Model Generation

Non-Final OA (§103, §DP)
Filed
Jun 19, 2024
Examiner
TSWEI, YU-JANG
Art Unit
2614
Tech Center
2600 — Communications
Assignee
EBAY KOREA CO., LTD.
OA Round
3 (Non-Final)
Grant Probability: 84% (Favorable)
OA Rounds: 3-4
To Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% (376 granted / 447 resolved; +22.1% vs TC avg), above average
Interview Lift: +17.0% across resolved cases with interview, a strong lift
Avg Prosecution: 2y 5m; 44 applications currently pending
Career History: 491 total applications across all art units
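As a sanity check, the headline examiner figures above are internally consistent. A minimal Python sketch (assuming the report's "vs TC avg" delta is measured in percentage points, which the source does not state explicitly) recomputes them from the raw counts:

```python
# Recompute the examiner's career allow rate from the raw counts in the report.
granted = 376
resolved = 447

allow_rate_pct = 100 * granted / resolved
print(f"Career allow rate: {allow_rate_pct:.1f}%")  # 84.1%, matching the reported 84%

# The report lists the examiner at +22.1 percentage points above the
# Tech Center average, which lets us back out the TC average itself.
delta_vs_tc = 22.1  # percentage points, per the report
tc_average_pct = allow_rate_pct - delta_vs_tc
print(f"Implied TC average: {tc_average_pct:.1f}%")  # 62.0%
```

The same arithmetic gives the remaining 71 of the 447 resolved cases as abandoned or otherwise closed without grant.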

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§103: 66.4% (+26.4% vs TC avg)
§102: 5.6% (-34.4% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 447 resolved cases.
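As a cross-check, the four statute rates and their deltas are mutually consistent: each pair implies the same Tech Center average estimate. A small Python sketch (again assuming the deltas are percentage points) verifies this:

```python
# Each statute's reported rate minus its "vs TC avg" delta should recover
# the single Tech Center average estimate used in the report's chart.
stats = {
    "101": (5.5, -34.5),
    "103": (66.4, +26.4),
    "102": (5.6, -34.4),
    "112": (7.1, -32.9),
}
for statute, (rate_pct, delta_pct) in stats.items():
    implied_tc_avg = rate_pct - delta_pct
    print(f"§{statute}: implied TC average = {implied_tc_avg:.1f}%")  # 40.0% for all four
```

All four statutes resolve to the same 40.0% baseline, which suggests the deltas were computed against a single TC-wide figure rather than per-statute averages.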

Office Action

§103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the Amendment filed on 1/2/2026. Claims 1-19 and 22-23 are pending. Claims 1 and 15 have been amended. Claims 20 and 21 have been cancelled. Claims 22 and 23 are newly added.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq.
for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1, 2, 3, 4, 8, 9, 10, 12, 13, 14, 15, 16, and 17 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 2, 3, 4, 6, 8, 9, 11, 12, 14, 15, 2, and 3, respectively, of Application 17/863,625 (now US Patent 12045944 B2). Although the claims at issue are not identical, they are not patentably distinct from each other because they claim the same subject matter and limitations, as explained below.
Claim 1 is determined to be obvious in light of claim 1 of 17/863,625 (now US Patent 12045944 B2) for having similar limitations, as shown below.

Instant application claim 1: A method comprising: receiving, by a computing device, scanning data resulting from a scan by a scanner through packaging of an object contained within the packaging, the scanner located outside of the packaging, the scan performed by the scanner through the packaging of the object contained within the packaging; generating, by the computing device, an uncolored three-dimensional digital model of the object based on the scanning data received; receiving, by the computing device, a two-dimensional digital image depicting a second object, in which the second object is same or similar to the object contained within the packaging; generating, by the computing device, mapped feature data by detecting features of the second object and corresponding features of the object contained within the packaging; based on the mapped feature data, modifying, by the computing device, the uncolored three-dimensional digital model of the object; and outputting, by the computing device, the uncolored three-dimensional digital model via a user interface without removing the object contained within the packaging from the packaging.

17/863,625 claim 1: 1. A method implemented by at least one computing device, the method comprising: receiving, by the at least one computing device, scanning data resulting from a scan by a scanner of a physical object contained within packaging, the scanner disposed out of the packaging, the scan performed by the scanner through the packaging to the physical object disposed within the packaging; generating, by the at least one computing device, an uncolored three-dimensional digital model of the physical object based on the received scanning data; detecting, by the at least one computing device, features of a digital image that correspond to features of the uncolored three-dimensional digital model; generating, by the at least one computing device, color for the uncolored three-dimensional digital model based on the detecting without removing the physical object contained within the packaging from the packaging; and outputting, by the at least one computing device, the colored three-dimensional digital model via a user interface without removing the physical object contained within the packaging from the packaging.

Although the claims at issue are not identical, they are not patentably distinct from each other. For example, claim 1 of both 17/863,625 and the instant application (18/747,915) discloses “receiving, by a computing device, scanning data resulting from a scan by a scanner through packaging of an object contained within the packaging, the scanner located outside of the packaging, the scan performed by the scanner through the packaging of the object contained within the packaging;” and “generating, by the computing device, an uncolored three-dimensional digital model of the object based on the scanning data received;”. Claim 1 of the instant application (18/747,915) discloses the additional limitation “receiving, by the computing device, a two-dimensional digital image depicting a second object, in which the second object is same or similar to the object contained within the packaging; generating, by the computing device, mapped feature data by detecting features of the second object and corresponding features of the object contained within the packaging; based on the mapped feature data, modifying, by the computing device, the uncolored three-dimensional digital model of the object”, which is not identical to claim 1 of 17/863,625; however, prior art Ito et al. (US 20170212661 A1) teaches those limitations. Ito and claim 1 of 17/863,625 are analogous since both are directed to generating and refining a three-dimensional representation of an object based on image data.
Claim 1 of 17/863,625 described scanning techniques employed by a scanning system that scans a physical object while disposed within packaging to form a three-dimensional digital model, then uses features of the model to align a viewing perspective with respect to the model with a viewing perspective of the object within the digital image. Ito provided a way of receiving two-dimensional images of an object, determining correspondence of features/landmarks between images, and using that correspondence to generate and update a three-dimensional model. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Ito's feature/landmark correspondence mapping and model updating technique into the invention of claim 1 of 17/863,625 such that, after generating the three-dimensional digital model from scan data, the system further uses a two-dimensional image of a same or similar object to generate mapped feature data and modify the generated three-dimensional model accordingly. Therefore, claim 1 of 17/863,625 with Ito et al. (US 20170212661 A1) discloses all limitations of claim 1 of the instant application (18/747,915).

Claim 2 is determined to be obvious in light of claim 2 of 17/863,625 (now US Patent 12045944 B2) for having similar limitations.
Instant application claim 2: 2. The method as described in claim 1, wherein the scan is an X-ray scan.
17/863,625 claim 2: 2. The method as described in claim 1, wherein the scan is an X-ray scan.

Claim 3 is determined to be obvious in light of claim 3 of 17/863,625 (now US Patent 12045944 B2) for having similar limitations.
Instant application claim 3: 3. The method as described in claim 1, wherein the uncolored three-dimensional digital model represents the object without the packaging.
17/863,625 claim 3: 3.
The method as described in claim 1, wherein the colored three-dimensional digital model represents the physical object without the packaging.

Claim 4 is determined to be obvious in light of claim 4 of 17/863,625 (now US Patent 12045944 B2) for having similar limitations.
Instant application claim 4: 4. The method as described in claim 1, wherein the object has a density that is greater than a density of the packaging.
17/863,625 claim 4: 4. The method as described in claim 1, wherein the physical object has a density that is greater than a density of the packaging.

Claim 8 is determined to be obvious in light of claim 6 of 17/863,625 (now US Patent 12045944 B2) for having similar limitations.
Instant application claim 8: 8. The method as described in claim 7, wherein the generating of the color for the at least one portion that is not colored is based on detecting a different portion of the initially colored three-dimensional digital model as corresponding to the at least one portion.
17/863,625 claim 6: 6. The method as described in claim 5, wherein the generating of the color for the at least one portion that is not colored is based on detecting a different portion of the initially colored three-dimensional digital model as corresponding to the at least one portion.

Claim 9 is determined to be obvious in light of claim 8 of 17/863,625 (now US Patent 12045944 B2) for having similar limitations.
Instant application claim 9: 9. The method as described in claim 7, wherein the generating of the color for the at least one portion that is not colored is based on stretching color from another portion of the initially colored three-dimensional digital model as supplying the color for the at least one portion.
17/863,625 claim 8: 8.
The method as described in claim 5, wherein the generating of the color for the at least one portion that is not colored is based on stretching color from another portion of the initially colored three-dimensional digital model as supplying the color for the at least one portion.

Claim 10 is determined to be obvious in light of claim 9 of 17/863,625 (now US Patent 12045944 B2) for having similar limitations.
Instant application claim 10: 10. The method as described in claim 6, wherein the generating of the color includes linear blend skinning of the digital image to the uncolored three-dimensional digital model.
17/863,625 claim 9: 9. The method as described in claim 1, wherein the generating includes linear blend skinning of the digital image to the uncolored three-dimensional digital model.

Claim 12 is determined to be obvious in light of claim 11 of 17/863,625 (now US Patent 12045944 B2) for having similar limitations.
Instant application claim 12: 12. The method as described in claim 1, further comprising generating, by the computing device, a search query based on the uncolored three-dimensional digital model.
17/863,625 claim 11: 11. The method as described in claim 1, further comprising generating a search query based on the colored three-dimensional digital model and wherein the digital image is a search result resulting from a search performed based on the search query.

Claim 13 is determined to be obvious in light of claim 12 of 17/863,625 (now US Patent 12045944 B2) for having similar limitations.
Instant application claim 13: 13. The method as described in claim 6, further comprising generating digital content as including the colored three-dimensional digital model.
17/863,625 claim 12: 12.
The method as described in claim 1, further comprising generating digital content as including the colored three-dimensional digital model and functionality that is user selectable to initiate a purchase of the physical object from a user.

Claim 14 is determined to be obvious in light of claim 14 of 17/863,625 (now US Patent 12045944 B2) for having similar limitations.
Instant application claim 14: 14. The method as described in claim 13, wherein the digital content is a webpage.
17/863,625 claim 14: 14. The method as described in claim 1, further comprising generating digital content as a webpage that includes the colored three-dimensional digital model.

Claim 15 is determined to be obvious in light of claim 15 of 17/863,625 (now US Patent 12045944 B2) for having similar limitations.
Instant application claim 15: 15. A system comprising: a memory component; and a processing device coupled to the memory component, the processing device to perform operations comprising: receiving scanning data resulting from a scan by a scanner through packaging of an object contained within the packaging, the scanner located outside of the packaging, the scan performed by the scanner through the packaging of the object contained within the packaging; generating an uncolored three-dimensional digital model of the object based on the scanning data received;
17/863,625 claim 15: 15.
A system comprising: a three-dimensional scanning device disposed out of packaging and a physical object disposed within the packaging, the three-dimensional scanning device configured to generate an uncolored three-dimensional digital model by scanning the physical object disposed within the packaging, the scanning of the physical object performed through the packaging; and a model coloring system implemented at least partially in hardware of a computing device to color the uncolored three-dimensional digital model of the physical object using a two-dimensional digital image of the physical object without removing the physical object disposed within the packaging from the packaging.
Instant application claim 15 (continued): receiving a two-dimensional digital image depicting a second object, in which the second object is same or similar to the object contained within the packaging; generating mapped feature data by detecting features of the second object and corresponding features of the object contained within the packaging; based on the mapped feature data, modifying the uncolored three-dimensional digital model of the object; and outputting the uncolored three-dimensional digital model via a user interface without removing the object contained within the packaging from the packaging.

Although the claims at issue are not identical, they are not patentably distinct from each other, for the same reasons specified for claim 1. Claim 15 of 17/863,625 with Ito et al. (US 20170212661 A1) discloses all limitations of claim 15 of the instant application (18/747,915).

Claim 16 is determined to be obvious in light of claim 2 of 17/863,625 (now US Patent 12045944 B2) for having similar limitations.
Instant application claim 16: 16. The system as described in claim 15, wherein the scan is an X-ray scan.
17/863,625 claim 2: 2. The method as described in claim 1, wherein the scan is an X-ray scan.
Claim 17 is determined to be obvious in light of claim 3 of 17/863,625 (now US Patent 12045944 B2) for having similar limitations.
Instant application claim 17: 17. The system as described in claim 15, wherein the uncolored three-dimensional digital model represents the object without the packaging.
17/863,625 claim 3: 3. The method as described in claim 1, wherein the colored three-dimensional digital model represents the physical object without the packaging.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7, 13-19, and 22-23 are rejected under 35 U.S.C. 103 as being unpatentable over Graybill et al. (US 9686481 B1, hereinafter Graybill) in view of Ikits (US 9443346 B2), further in view of Ito et al. (US 20170212661 A1, hereinafter Ito).

Regarding Claim 15, Graybill teaches a system (Graybill, Fig. 2, Element 200 system) comprising: a memory component (Graybill, Fig.
2, Element 246 Memory); and a processing device coupled to the memory component (Graybill, Fig. 2, Element 244 Processor is coupled to Element 246 Memory), the processing device to perform operations comprising: receiving scanning data resulting from a scan by a scanner through packaging of an object contained within the packaging (Graybill, Column 3, Line 1-15, The fulfillment center 110 includes an X-ray imaging scanner 120 and a sensor 126…such images may be subjected to one or more radiographic analyses to identify the contents of the container 10, including the items actually included within the container 10), the scanner located outside of the packaging (Graybill, Fig. 1A, Scanner 120 is located outside the Element 10 Container), the scan performed by the scanner through the packaging of the object contained within the packaging (Graybill, Column 12, Line 31-35, “the contents of a sealed container may be evaluated by capturing one or more X-ray images of the container, and performing one or more radiographic analyses on the captured images to determine the amount, share or portion of the contents thereof which include items, dunnage and/or air, as well as the condition of such contents; Column 8, Line 50-51, operatively or functionally joined with the computer 210, the X-ray scanner 220); generating an [[ uncolored ]] three-dimensional digital model of the object based on the scanning data received (Graybill, Column 4, Line 40-45, X-ray computed tomographic techniques may now use computers to generate “slices,” or parallel images of portions of a three-dimensional scanned object); receiving a two-dimensional digital image depicting a second object (Graybill, Column 5, Line 32-35, “outlines of objects <read on first and second object> may be identified in a digital X-ray image according to any number of visual analyses, algorithms or machine-learning tools”), [[ in which the second object is same or similar to the object contained within the packaging; ]] [[ 
generating mapped feature data by ]] detecting features of the second object [[ and corresponding features of the object ]] contained within the packaging; (Graybill, Column 5, Line 25-28, “textures of features or objects expressed in a digital X-ray image may be identified using one or more computer-based visual analyses”) [[ based on the mapped feature data, modifying the uncolored three-dimensional digital model of the object; ]] and outputting the [[ uncolored ]] three-dimensional digital model via a user interface without removing the object contained within the packaging from the packaging (Graybill, Column 3, Line 22-25, “The user interface 126 also includes information regarding the container 10 and its contents, which may be obtained through one or more analyses of the image 120A”; Abstract, “X-ray imaging, may be used to identify information regarding the contents of a container or other sealed object without having to open the container or the sealed object”).

But Graybill does not explicitly disclose that the image is uncolored, or in which the second object is same or similar to the object contained within the packaging; generating mapped feature data [[ by detecting features of the second object and ]] corresponding features [[ of the object contained within the packaging; ]] based on the mapped feature data [[ , modifying the uncolored three-dimensional digital model of the object ]].
However, Ikits teaches generating an uncolored three-dimensional digital model of the object based on the scanning data received (Ikits, Column 2, Line 53-57, “retrieving a three-dimensional bone model corresponding to a portion of the anatomy stored in the memory, associating the three-dimensional bone model with the three-dimensional image data such that the three-dimensional bone model” Column 4, Line 26-30, 65-67, System 100 may receive such information from a user via an input/output (I/O) interface…The systems and methods described herein may generally create interactive high-quality virtual radiographs, also referred to herein as x-ray images; Column 5, Line 34-39, “color module 116 may be configured to provide grayscale color settings <read on uncolored>). [[ in which the second object is same or similar to the object contained within the packaging; generating mapped feature data by detecting features of the second object and corresponding features of the object contained within the packaging; based on the mapped feature data, modifying the uncolored three-dimensional digital model of the object. ]] and outputting the uncolored three-dimensional digital model (Ikits, Column 5, Line 34-39, “Attenuation model module 114 may generally be configured to indicate such a feature in an x-ray image output by system 100, and color module 116 may be configured to provide grayscale color settings for the image for display”; it is noted that by adjusting the grayscale color settings, the image can be set to uncolored).

Ikits and Graybill are analogous since both deal with processing and visualizing x-ray based scan data to enable user understanding of internal object structure. Graybill provided a way of capturing X-ray scan data through sealed packaging and presenting the resulting information via a user interface and/or augmented reality device to inform users about container contents.
Ikits provided a way of producing and modulating grayscale x-ray images derived from 3D imaging data, including attenuation models and grayscale control modules for emphasizing key anatomical features. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the grayscale modulation techniques taught by Ikits into the modified invention of Graybill to enable dynamic adjustment of image contrast and coloration when presenting scan-derived digital models. The motivation for doing so is to enhance visual clarity and user focus during review of internal object structures in scanned containers, particularly when rendering 3D model data that may include overlapping densities or mixed materials. Adjusting grayscale intensity can reduce visual distraction and improve interpretation of x-ray based imagery across different rendering platforms.

But the combination does not explicitly disclose in which the second object is same or similar to the object contained within the packaging; generating mapped feature data [[ by detecting features of the second object and ]] corresponding features [[ of the object contained within the packaging; ]] based on the mapped feature data [[ , modifying the uncolored three-dimensional digital model of the object ]].

However, Ito teaches receiving a two-dimensional digital image depicting a second object, in which the second object is same or similar to the object contained within the packaging (Ito, Paragraph [0018], “Techniques and systems are described to generate three-dimensional models from two-dimensional images… For example, a user first provides a plurality of images of an object.
The user then specifies which views of the object are captured by respective ones of the images, e.g., front, back, side, top, bottom, and so forth”); generating mapped feature data by detecting features of the second object and corresponding features of the object contained within the packaging (Ito, Paragraph [0019], “The images are then displayed in a user interface such that a user may indicate correspondence of landmarks between the images… For instance, a user may first indicate a point… which is referred to as a user-specified point. The computing device then estimates an estimated point in a second one of the images… This process is then repeated by interacting with the different images to indicate correspondence between points”; [0042], “estimate a plurality of estimated points on the object … with each of the estimated points corresponding to a respective user-specified point”); based on the mapped feature data, modifying the [[ uncolored ]] three-dimensional digital model of the object (Ito, Paragraph [0043], “mesh of the three-dimensional model of the object is then generated as a mapping of respective ones of the user-specified points to respective ones of the estimated points in the plurality of images” [0044], “The user interface 116 also includes a real time display 610 of the three dimensional model 114 … and is further updated based on subsequent movement of either ones of the inputs”).

Ito and Graybill are analogous since both are directed to generating and refining a three-dimensional representation of an object based on image data. Graybill provides a way of scanning an object through packaging and generating a three-dimensional digital model of the object without removing it from the packaging. Ito provides a way of receiving two-dimensional images of an object, determining correspondence of features/landmarks between images, and using that correspondence to generate and update a three-dimensional model.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Ito's feature/landmark correspondence mapping and model updating technique into the modified invention of Graybill such that, after generating the three-dimensional digital model from scan data, the system further uses a two-dimensional image of a same or similar object to generate mapped feature data and modify the generated three-dimensional model accordingly. The motivation for doing so is to improve the accuracy and refinement of the generated model using known feature correspondence techniques.

Regarding Claim 1, it recites limitations similar in scope to the limitations of Claim 15 but as a method, and the combination of Graybill, Ikits and Ito teaches all the limitations of Claim 15. Therefore, it is rejected under the same rationale.

Regarding Claim 2, the combination of Graybill, Ikits and Ito teaches the invention in Claim 1. The combination further teaches wherein the scan is an X-ray scan (Graybill, Column 3, Line 3-5, “the X-ray scanner 120 is configured to capture one or more X-ray images of the container 10”).

Regarding Claim 3, the combination of Graybill, Ikits and Ito teaches the invention in Claim 1. The combination further teaches wherein the [[ uncolored ]] three-dimensional digital model represents the object without the packaging (Graybill, Column 4, Line 40-45, X-ray computed tomographic techniques may now use computers to generate “slices,” or parallel images of portions of a three-dimensional scanned object). But Graybill does not explicitly disclose that the image is uncolored. However, Ikits teaches generating an uncolored three-dimensional digital model of the object (Ikits, Column 5, Line 34-39, “color module 116 may be configured to provide grayscale color settings; it is noted that by adjusting the grayscale color settings, the image can be set to uncolored).
As explained in the rejection of claim 1, the obviousness rationale for combining the uncolored image of Ikits into Graybill is provided above.

Regarding Claim 4, the combination of Graybill, Ikits and Ito teaches the invention in Claim 1. The combination further teaches wherein the object has a density that is greater than a density of the packaging (Graybill, Column 13, Line 40-52, the colors (e.g., the brightness or darkness) of objects expressed within an X-ray image is a function of the relative radiolucence and/or radiopacity of the objects, which is typically a function of the densities of the objects. Therefore, one or more colorimetric or other visual analyses of the X-ray images may be performed in order to recognize the portions of the image that correspond to items… the system may determine a number of items, the orientations of such items, as well as a condition of the items within the container; it is noted that since the objects are identified by density, those identified objects have a higher density than the container itself).

Regarding Claim 5, the combination of Graybill, Ikits and Ito teaches the invention in Claim 1. The combination further teaches further comprising generating, by the computing device, digital content including the [[ uncolored ]] three-dimensional digital model (Graybill, Column 4, Line 40-45, X-ray computed tomographic techniques may now use computers to generate “slices,” or parallel images of portions of a three-dimensional scanned object). But Graybill does not explicitly disclose that the image is uncolored. However, Ikits teaches generating an uncolored three-dimensional digital model of the object (Ikits, Column 5, Line 34-39, “color module 116 may be configured to provide grayscale color settings; it is noted that by adjusting the grayscale color settings, the image can be set to uncolored). As explained in the rejection of claim 1, the obviousness rationale for combining the uncolored image of Ikits into Graybill is provided above.
Regarding Claim 6, the combination of Graybill, Ikits and Ito teaches the invention in Claim 1. The combination further teaches further comprising: detecting, by the computing device, features of a digital image that correspond to features of the [[ uncolored ]] three-dimensional digital model (Graybill, Abstract, "X-ray imaging, may be used to identify information regarding the contents of a container or other sealed object without having to open the container or the sealed object"); generating, by the computing device, color for the [[ uncolored ]] three-dimensional digital model based on the detecting (Graybill, Column 4, Lines 57-62, analysts of images generated by … multiple X-ray images of an object may be captured from different perspectives … such systems to detect components of the object within such images; Column 8, Lines 45-47, "The projector 224 may be configured to generate and project full color single images, such as a digital X-ray image, or, alternatively, full motion video images"); and outputting, by the computing device, a colored three-dimensional digital model via a user interface without removing the object contained within the packaging from the packaging (Graybill, Abstract, "X-ray imaging, may be used to identify information regarding the contents of a container or other sealed object without having to open the container or the sealed object"; Column 8, Lines 8-15, "The reflected light within visible RGB bands as outputted from the first spectral band pixel diode detector array can be coupled to an image processor, which may generate an image output that may be displayed on a computer display or outputted to a hard copy medium, and associated with an object"). But Graybill does not explicitly disclose that the image is uncolored.
However, Ikits teaches generating an uncolored three-dimensional digital model of the object (Ikits, Column 5, Lines 34-39, "color module 116 may be configured to provide grayscale color settings"; it is noted that by adjusting the grayscale color, the image can be set to uncolored). As explained in the rejection of claim 1, the rationale for combining the uncolored image of Ikits into Graybill is provided above.

Regarding Claim 7, the combination of Graybill, Ikits and Ito teaches the invention in Claim 1. The combination further teaches wherein the generating includes: generating an initially colored three-dimensional digital model including colored features that correspond to features of the digital image (Graybill, Column 4, Lines 44-46, "generate "slices," or parallel images of portions of a three-dimensional scanned object"; Column 5, Lines 21-45, colors of pixels, or of groups of pixels, in a digital X-ray image may be determined and quantified according to one or more standards, e.g., the RGB ("red-green-blue") color model… the systems and methods disclosed herein are not limited to any one means or method for generating X-ray images). But Graybill does not explicitly disclose generating color for at least one portion of the uncolored three-dimensional digital model that is not colored in the initially colored three-dimensional digital model to form the colored three-dimensional digital model. However, Ikits teaches generating an uncolored three-dimensional digital model of the object (Ikits, Column 5, Lines 34-39, "color module 116 may be configured to provide grayscale color settings"; it is noted that by adjusting the grayscale color, the image can be set to uncolored). As explained in the rejection of claim 6, the rationale for combining the uncolored image of Ikits into Graybill is provided above.

Regarding Claim 13, the combination of Graybill, Ikits and Ito teaches the invention in Claim 6.
The combination further teaches further comprising generating digital content as including the colored three-dimensional digital model (Graybill, Column 4, Lines 57-62, analysts of images generated by … multiple X-ray images of an object may be captured from different perspectives; Column 8, Lines 45-47, "The projector 224 may be configured to generate and project full color single images, such as a digital X-ray image, or, alternatively, full motion video images").

Regarding Claim 14, the combination of Graybill, Ikits and Ito teaches the invention in Claim 13. The combination further teaches wherein the digital content is a webpage (Graybill, Column 11, Lines 29-33, "The fulfillment center 210, the worker 230, the glasses 240 and/or the external user 260 may use any web-enabled or Internet applications or features, or any other client-server applications").

Regarding Claim 16, it recites limitations similar in scope to the limitations of Claim 2 and therefore is rejected under the same rationale.

Regarding Claim 17, it recites limitations similar in scope to the limitations of Claim 3 and therefore is rejected under the same rationale.

Regarding Claim 18, it recites limitations similar in scope to the limitations of Claim 6 and therefore is rejected under the same rationale.

Regarding Claim 19, it recites limitations similar in scope to the limitations of Claim 1, and the combination of Graybill, Ikits and Ito teaches all the limitations of Claim 1. Further, Graybill discloses that these features can be implemented on a computer-readable storage medium (Graybill, Column 12, Lines 7-15, "Some embodiments of the systems and methods of the present disclosure may also be provided as a computer executable program product including a non-transitory machine-readable storage medium having stored thereon instructions that may be used to program a computer to perform processes or methods described herein").
Regarding Claim 22, the combination of Graybill, Ikits and Ito teaches the invention in Claim 1. The combination further teaches wherein the detecting of the features of the second object and the corresponding features of the object contained within the packaging includes one or more of edge detection, gradient detection, [[ blob detection, or identification ]] of one or more movable parts (Graybill, Column 5, Lines 32-35, "outlines of objects may be identified in a digital X-ray image according to any number of visual analyses, algorithms or machine-learning tools, such as by recognizing edges <read on edge detection>, contours or outlines of objects in the X-ray image, or of portions of objects"; Column 5, Lines 25-30, "textures of features or objects expressed in a digital X-ray image may be identified ... such as by identifying changes in intensities within regions or sectors of the X-ray image <read on gradient detection>").

Regarding Claim 23, the combination of Graybill, Ikits and Ito teaches the invention in Claim 22. The combination further teaches the modifying of the uncolored three-dimensional digital model of the object (Ikits, Column 2, Lines 53-57, "retrieving a three-dimensional bone model corresponding to a portion of the anatomy stored in the memory, associating the three-dimensional bone model with the three-dimensional image data such that the three-dimensional bone model"; Column 4, Lines 26-30, 65-67, System 100 may receive such information from a user via an input/output (I/O) interface… The systems and methods described herein may generally create interactive high-quality virtual radiographs, also referred to herein as x-ray images; Column 5, Lines 34-39, "color module 116 may be configured to provide grayscale color settings" <read on uncolored>). As explained in the rejection of claim 1, the rationale for combining the uncolored image of Ikits into Graybill is provided above.
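The edge detection and gradient detection that the examiner reads onto Graybill's outline and intensity analysis can be illustrated with a minimal finite-difference sketch. This is an illustration only, not code from any cited reference; the toy intensity values and the threshold are hypothetical.

```python
def gradient_edges(img, threshold=1):
    """Flag pixels whose finite-difference gradient magnitude exceeds a
    threshold -- a crude stand-in for edge/gradient detection on a scan."""
    h, w = len(img), len(img[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = img[y][min(x + 1, w - 1)] - img[y][x]  # horizontal step
            gy = img[min(y + 1, h - 1)][x] - img[y][x]  # vertical step
            edges[y][x] = (gx * gx + gy * gy) ** 0.5 > threshold
    return edges

# Hypothetical X-ray-like intensities: a dark object on a bright background
img = [
    [9, 9, 9, 9],
    [9, 0, 0, 9],
    [9, 0, 0, 9],
    [9, 9, 9, 9],
]
edges = gradient_edges(img)
print(sum(row.count(True) for row in edges))  # boundary pixels flagged
```

Forward differences flag only the leading side of each transition, which is why the flagged set hugs the object boundary rather than filling its interior.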
Graybill does not explicitly disclose, but Ito teaches, wherein the modifying of the [[ uncolored ]] three-dimensional digital model of the object includes modifying positioning of at least one of the one or more movable parts (Ito, Paragraph [0041], "The user-specified points 512 may also be moved and moved again by a user as desired"; Paragraph [0043], "A mesh of the three dimensional model of the object is then generated as a mapping of respective ones of the user-specified points to respective ones of the estimated points in the plurality of images"). Ito and Graybill are analogous since both are directed to processing image data to detect features of an object and generate or manipulate a three-dimensional model. Graybill provides a way of recognizing edges/outlines/contours in scan images of an object within packaging (i.e., feature detection such as edge detection) and generating a three-dimensional model of the object. Ito provides a way of detecting landmarks/feature points, determining correspondences, and modifying a three-dimensional model by moving those points to update portions of the model. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Ito's landmark-based model modification technique into the modified invention of Graybill such that, after performing feature detection and generating the three-dimensional model, the system can identify parts/regions of the object using feature points and modify the positioning of those parts/regions in the model by adjusting the corresponding points.

Claims 8-9 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Graybill et al. (US 9686481 B1, hereinafter Graybill) in view of Ikits (US 9443346 B2), further in view of Ito et al. (US 20170212661 A1, hereinafter Ito) as applied to Claims 1 and 15 above, and further in view of Huang et al. (US 20160012646 A1, hereinafter Huang).
Regarding Claim 8, the combination of Graybill, Ikits and Ito teaches the invention in Claim 7. The combination does not explicitly disclose, but Huang teaches, wherein the generating of the color for the at least one portion that is not colored is based on detecting a different portion of the initially colored three-dimensional digital model as corresponding to the at least one portion (Huang, Fig. 2N, Step 305, Paragraph [0091], "matching, based on 3D structure, a portion of the first 3D distribution of colored points, to a portion of the second 3D distribution of colored points"). Huang and Graybill are analogous since both of them deal with 3D modeling. Graybill provides a way of scanning the object inside a container using X-ray in augmented reality. Huang provides a way of coloring the 3D model interactively, based on alignment with a 2D image, for regions of the model that are not colored. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the interactive coloring of images taught by Huang into the modified invention of Graybill such that, when creating the 3D model, the system will be able to dynamically color uncolored regions by comparison with the image in order to color the 3D model completely.

Regarding Claim 9, the combination of Graybill, Ikits and Ito teaches the invention in Claim 7. The combination does not explicitly disclose, but Huang teaches, wherein the generating of the color for the at least one portion that is not colored is based on stretching color from another portion of the initially colored three-dimensional digital model as supplying the color for the at least one portion (Huang, Fig. 2N, Step 305, Paragraph [0091], "matching, based on 3D structure, a portion of the first 3D distribution of colored points, to a portion of the second 3D distribution of colored points").
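Huang's matching of one 3D distribution of colored points to another can be pictured, as a sketch only (with hypothetical coordinates and colors, not code from the cited reference), as a nearest-neighbor color transfer: each uncolored 3D point borrows the color of the closest colored point.

```python
def transfer_colors(uncolored_pts, colored_pts):
    """For each uncolored 3D point, copy the color of the nearest
    colored 3D point (brute-force nearest-neighbor matching)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    result = []
    for p in uncolored_pts:
        nearest = min(colored_pts, key=lambda c: dist2(p, c[0]))
        result.append((p, nearest[1]))  # (point, borrowed color)
    return result

# Hypothetical data: colored reference points as ((x, y, z), (r, g, b))
colored = [((0, 0, 0), (255, 0, 0)), ((10, 0, 0), (0, 0, 255))]
print(transfer_colors([(1, 0, 0), (9, 1, 0)], colored))
# -> [((1, 0, 0), (255, 0, 0)), ((9, 1, 0), (0, 0, 255))]
```

A real implementation would first align the two distributions (e.g., by minimizing an alignment energy, as Huang's paragraph [0007] describes) before transferring color.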
Huang and Graybill are analogous since both of them deal with 3D modeling. Graybill provides a way of scanning the object inside a container using X-ray in augmented reality. Huang provides a way of coloring the 3D model interactively, based on alignment with a 2D image, for regions of the model that are not colored. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the interactive coloring of images taught by Huang into the modified invention of Graybill such that, when creating the 3D model, the system will be able to dynamically color uncolored regions by comparison with the image in order to color the 3D model completely.

Regarding Claim 11, the combination of Graybill, Ikits and Ito teaches the invention in Claim 6. The combination does not explicitly disclose, but Huang teaches, wherein the generating of the color includes stretching a portion of the digital image and compressing another portion of the digital image as aligning a perspective of the uncolored three-dimensional digital model to a perspective of the object in the digital image (Huang, Paragraph [0080], "the model post-processing may include model or mesh compression"; Fig. 2N, Step 305, Paragraph [0091], "matching, based on 3D structure, a portion of the first 3D distribution of colored points, to a portion of the second 3D distribution of colored points"; [0007], In certain embodiments, the processor may minimize an alignment energy between the first 3D distribution of colored points and the second 3D distribution of colored points; [0064], the image may include a two-dimensional (2D) array of pixels, and a third dimension corresponding to depth values associated with the pixels). Huang and Graybill are analogous since both of them deal with 3D modeling. Graybill provides a way of scanning the object inside a container using X-ray in augmented reality.
Huang provides a way of performing compression during the 3D modeling process. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the compression taught by Huang into the modified invention of Graybill such that, when creating the 3D model, the system will be able to use a compression process and may reduce and/or simplify mesh data for transmission and/or storage.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Graybill et al. (US 9686481 B1, hereinafter Graybill) in view of Ikits (US 9443346 B2), further in view of Ito et al. (US 20170212661 A1, hereinafter Ito) as applied to Claims 1 and 15 above, and further in view of Kanaujia et al. (US 20130250050 A1, hereinafter Kanaujia).

Regarding Claim 10, the combination of Graybill, Ikits and Ito teaches the invention in Claim 6. The combination further teaches wherein the generating of the color includes [[ linear blend skinning ]] of the digital image to the uncolored three-dimensional digital model and outputting the uncolored three-dimensional digital model (Ikits, Column 5, Lines 34-39, "Attenuation model module 114 may generally be configured to indicate such a feature in an x-ray image output by system 100, and color module 116 may be configured to provide grayscale color settings for the image for display"; it is noted that by adjusting the grayscale color, the image can be set to uncolored). As explained in the rejection of claim 6, the rationale for combining the uncolored image of Ikits into Graybill is provided above. However, the combination does not explicitly disclose linear blend skinning. But Kanaujia teaches wherein the generating of the color includes linear blend skinning of the digital image to the uncolored three-dimensional digital model (Kanaujia, Paragraphs [0090], [0105], [0117], "the initial pose hypotheses are refined.
This may include generation of a coarse 3D human shape model for each pose and comparing the same to the extracted 3D visual hull obtained in step 5103"; "2D shapes of the silhouette are used in discriminative 3D pose prediction"; "Linear Blend Skinning (LBS) may be used for efficient non-rigid deformation of skin as a function of an underlying skeleton"). Kanaujia and Graybill are analogous since both of them deal with 3D modeling. Graybill provides a way of scanning the object inside a container using X-ray in augmented reality. Kanaujia provides a way of using a linear blend skinning function while creating the 3D model. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the linear blend skinning taught by Kanaujia into the modified invention of Graybill such that, when creating the 3D model, the system will be able to use the Linear Blend Skinning function to dynamically and accurately map, via linear regression, between the 2D image and the 3D model.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Graybill et al. (US 9686481 B1, hereinafter Graybill) in view of Ikits (US 9443346 B2), further in view of Ito et al. (US 20170212661 A1, hereinafter Ito) as applied to Claims 1 and 15 above, and further in view of Harp et al. (US 20150186418 A1, hereinafter Harp).

Regarding Claim 12, the combination of Graybill, Ikits and Ito teaches the invention in Claim 6. The combination further teaches further comprising generating, by the computing device, [[ a search query based on ]] the uncolored three-dimensional digital model. However, Ikits teaches generating an uncolored three-dimensional digital model of the object (Ikits, Column 5, Lines 34-39, "color module 116 may be configured to provide grayscale color settings"; it is noted that by adjusting the grayscale color, the image can be set to uncolored).
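The Linear Blend Skinning cited from Kanaujia in the Claim 10 rejection above deforms each vertex as a weighted blend of bone transforms applied to that vertex. The following is a minimal 2D sketch under stated assumptions (the bone transforms and skinning weights are hypothetical, and real LBS operates on full rigid transforms in 3D), not code from any cited reference.

```python
def lbs(vertex, bones, weights):
    """Linear Blend Skinning: blend each bone's transform of the vertex,
    weighted by that bone's skinning weight (weights sum to 1)."""
    x = sum(w * t(vertex)[0] for t, w in zip(bones, weights))
    y = sum(w * t(vertex)[1] for t, w in zip(bones, weights))
    return (x, y)

# Two hypothetical bone transforms: identity, and a translation of +4 in x
bones = [lambda v: v, lambda v: (v[0] + 4, v[1])]
print(lbs((1.0, 2.0), bones, [0.5, 0.5]))
# -> (3.0, 2.0): the vertex lands halfway between the two bone results
```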
As explained in the rejection of claim 1, the rationale for combining the uncolored image of Ikits into Graybill is provided above. However, the combination does not explicitly disclose a search query. But Harp teaches generating, by the computing device, a search query based on the uncolored three-dimensional digital model (Harp, Paragraph [0002], a given object may be scanned from a number of different angles, and the scanned images can be combined to generate the 3D image of the object; [0004], The method further comprises identifying a 3D model that corresponds to the object indicated by the search query from within a database of three-dimensional (3D) object data models). Harp and Graybill are analogous since both of them deal with 3D modeling. Graybill provides a way of scanning the object inside a container using X-ray in augmented reality. Harp provides a way for the user to search before scanning and generating the image during the 3D modeling process. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the search query taught by Harp into the modified invention of Graybill such that, when creating the 3D model, the system will be able to create the image based on the user's search query result, which is more user friendly and creates customized data closer to the user's requirements.

Response to Arguments

Applicant's arguments with respect to Claims 1, 15 and 19, filed on 1/2/2026, with respect to the rejection under 35 U.S.C. § 103 have been considered but are moot in view of the new ground(s) of rejection. The argued subject matter is now taught by the combination of Graybill, Ikits and Ito. In regard to Claims 2-14 and 17-18, they directly or indirectly depend on independent Claims 1 and 15, respectively. Applicant does not argue anything other than independent Claims 1 and 15. The limitations in those claims are addressed in conjunction with the combination previously established, as explained above.
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

US 20130293539 A1: Volume dimensioning systems and methods
US 20020139853 A1: Planar laser illumination and imaging (PLIIM) system employing wavefront control methods for reducing the power of speckle-pattern noise digital images acquired by said system

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YUJANG TSWEI, whose telephone number is (571) 272-6669. The examiner can normally be reached 8:30am-5:30pm EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/YuJang Tswei/ Primary Examiner, Art Unit 2614

Prosecution Timeline

Jun 19, 2024
Application Filed
May 17, 2025
Non-Final Rejection — §103, §DP
Jul 01, 2025
Examiner Interview Summary
Jul 01, 2025
Applicant Interview (Telephonic)
Aug 21, 2025
Response Filed
Sep 27, 2025
Final Rejection — §103, §DP
Dec 17, 2025
Applicant Interview (Telephonic)
Dec 17, 2025
Examiner Interview Summary
Jan 02, 2026
Request for Continued Examination
Jan 21, 2026
Response after Non-Final Action
Feb 22, 2026
Non-Final Rejection — §103, §DP
Apr 15, 2026
Applicant Interview (Telephonic)
Apr 15, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579805
AUGMENTED, VIRTUAL AND MIXED-REALITY CONTENT SELECTION & DISPLAY FOR TRAVEL
2y 5m to grant Granted Mar 17, 2026
Patent 12579838
Perspective Distortion Correction on Faces
2y 5m to grant Granted Mar 17, 2026
Patent 12567213
COMPUTER VISION AND ARTIFICIAL INTELLIGENCE METHOD TO OPTIMIZE OVERLAY PLACEMENT IN EXTENDED REALITY
2y 5m to grant Granted Mar 03, 2026
Patent 12567189
RELATIONAL LOSS FOR ENHANCING TEXT-BASED STYLE TRANSFER
2y 5m to grant Granted Mar 03, 2026
Patent 12561930
PARAMETRIC EYEBROW REPRESENTATION AND ENROLLMENT FROM IMAGE INPUT
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
84%
Grant Probability
99%
With Interview (+17.0%)
2y 5m
Median Time to Grant
High
PTA Risk
Based on 447 resolved cases by this examiner. Grant probability derived from career allow rate.
