DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 9/19/2025 has been entered.
Claims Status
Claims 3-6, 10-13, and 17-19 are cancelled.
Claims 1-2, 7-9, 14-16, and 20 remain pending and stand rejected.
Response to Arguments
I. Applicant’s arguments made with respect to the rejection under 35 USC 103 have been fully considered but are moot in view of new grounds of rejection. Applicant’s amendment necessitated the new grounds of rejection.
II. Applicant’s arguments, made with respect to accompanying amendments, concerning the rejection under 35 USC 101 have been fully considered and are persuasive. The Examiner emphasizes Applicant’s assertion that the claimed invention as amended provides a specific method of generating and displaying object visualizations in an extended reality environment. The claims now recite, in combination: initiate the extended reality platform for display on a computing device, which comprises a stereoscopic head-mounted display and at least one head motion tracking sensor that is utilized in presenting the extended reality platform as a three-dimensional environment; apply one or more conversion algorithms to the plurality of images to extract a three-dimensional shape from the plurality of images; and generate and display (based on the extracted three-dimensional shape) a virtual rendering of the physical object.
While the claims may still “recite” an abstract idea, the ordered combination of elements as claimed is effective to integrate any recited exception into a practical application, or alternatively provides “significantly more.” The limitations discussed above, taken as an ordered combination, go beyond merely instructing the computer to implement an abstract idea on a computer, and beyond generally linking the abstract idea to a particular technological environment. The ordered combination of limitations (above) is central to the inventive concept asserted by Applicant and reflects the specific operation of the computerized components to trigger a particular functionality for the generation of physical object visualizations within a three-dimensional environment rendered via a stereoscopic head-mounted display.
Even assuming each individual operation of the computer was known in vacuo, the ordered combination does not correspond to any of the conventional activities articulated in the MPEP (most notably MPEP 2106.05(d)(II)). Furthermore, under Step 2B, one of ordinary skill would not have readily concluded that this ordered set of operations was widely prevalent or in common use in the relevant industry at the time of invention (see: Memorandum - Revising 101 Eligibility Procedure in view of Berkheimer v. HP, Inc. (April 19, 2018)).
Accordingly, the Examiner finds the claims eligible under 35 USC 101.
Examiner Comment – Claim term interpretation
The term technical specification appears in claims 1, 8, and 15; however, the specification neither provides a definition of the term nor provides any examples of technical specifications.
As understood in the art, a technical specification may comprise any specification of details of a physical object such as size, color, dimensions, shape, etc., and may be received in various forms including (but not limited to) manual input, images, drawings, figures, files (e.g., CAD files), or the like.
Accordingly, technical specification has been interpreted as any type of input defining one or more characteristics of a physical object.
Claim Rejections - 35 USC § 112(a)
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-2, 7-9, 14-16, and 20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding claim 1, claim 1 recites the following limitations:
apply one or more conversion algorithms to the plurality of images to extract a three-dimensional shape from the plurality of images, wherein at least one input to the one or more conversion algorithms comprises one or more technical specifications associated with the physical object (emphasis added).
This subject matter of the claim does not conform to the disclosure in such a manner that one of ordinary skill in the art would recognize that Applicant actually had possession of the claimed invention at the time the application was filed. The most pertinent portion of the disclosure is as follows:
[0058] Next, as shown in block 210, the process flow includes generating and displaying, via the XR platform, a virtual rendering of the physical object. In some embodiments, the system may receive, from the third party system, the managing entity system, and/or the user, a plurality of images of the physical object. The system may then apply one or more image processing techniques to generate a two-dimensional or three-dimensional virtual rendering. The image processing techniques may include one or more conversion algorithms (e.g. linear perspective algorithms, atmosphere scattering algorithms, binocular disparity algorithms, shape from shading algorithms, and/or the like) and may extract a three-dimensional shape from the plurality of images using a depth map and/or the like. In some embodiments, the system may receive, from the third party system, the managing entity system, and/or the user, a plurality of technical specifications of the physical object. The system may then input one or more technical specifications to a machine learning engine or algorithm configured to convert the technical specifications into a virtual rendering.
Notably, paragraph 0010 provides further literal support, while paragraph 0047 discloses, at a high level, converting spoken information into useable digital information (but does not relate this to conversion performed by a machine learning model on inputted technical specifications).
The limitation wherein at least one input to the one or more conversion algorithms comprises one or more technical specifications associated with the physical object requires the technical specifications as input to the conversion algorithm; however, 0058 discloses that the image(s) are provided to the conversion algorithm – not the technical specification. The technical specification is instead provided to a “machine learning engine or algorithm configured to convert the technical specifications into a virtual rendering” in 0058.
As understood from paragraph 0058, the technical specification is a separate input to a separate machine learning model, rather than an input to the conversion algorithm(s) (i.e., it is the images, not the technical specification, that are provided to the conversion algorithm). This is underscored by the phrasing in 0058: “The system may then input one or more technical specifications to a machine learning engine or algorithm configured to convert the technical specifications into a virtual rendering.” This phrasing indicates that the input to the machine learning model/algorithm occurs only after the images have been processed by the conversion algorithm(s).
Furthermore, none of paragraphs 0058, 0010, or 0047 provides sufficient detail with respect to how the machine learning engine or algorithm is configured to “convert” the technical specifications into a virtual rendering. This is nothing more than a “black box,” which is not sufficient. Lastly, the technical specification itself remains undefined throughout the specification, is not linked in any manner to the images provided to the conversion algorithm, and the specification fails to provide any examples of what the technical specifications may comprise.
Accordingly, the subject matter wherein at least one input to the one or more conversion algorithms comprises one or more technical specifications associated with the physical object was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, at the time the application was filed, had possession of the claimed invention.
Regarding claims 2 and 7, claims 2 and 7 depend from claim 1, inherit those deficiencies, and are rejected therewith.
Regarding claims 8-9 and 14 (computer program product), and claims 15-16 and 20 (method), these claims recite at least substantially similar concepts and elements as recited in claims 1-2 and 7 such that similar analysis of the claims would be readily apparent to one of ordinary skill in the art. As such, claims 8-9 and 14 and claims 15-16 and 20 are rejected under at least similar rationale under 35 USC 112(a).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 2, 7-9, 14-16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Friesen (US 2024/0338711 A1) in view of Oser (US 2024/0273597 A1), Gurnani (US 2016/0019547 A1), and Wang (US 2023/0419392 A1).
Regarding claim 1, Friesen discloses a system (e.g., Fig. 9 #920 & 924, 0259) for digital object transfer in a network environment, the system comprising:
at least one non-transitory storage device storing a network platform (see: 0004 (e.g., memory), 0032 (dedicated application or web browser), 0256-0257, 0261, Fig. 9 #900, 904, & 906); and
at least one processing device coupled to the at least one non-transitory storage device, wherein the at least one processing device (see: 0004 (e.g., processor), 0260, Fig. 9 #920, 924 & 952) is configured to:
initiate the network platform for display on a computing device associated with a user (see: Fig. 2A, 0044, 0193);
display, via the network platform, at least one transaction associated with the user (see: Fig. 2C #212A-212N, 0048, Fig. 5A #502);
electronically receive, via the network platform, a first user input selecting a first transaction associated with a third party system, wherein the first transaction is further associated with a physical object (see: Fig. 1 #106, Fig. 2C, 0048);
Note: server 116 is associated with a chain of affiliated stores (i.e., multiple parties). Identified store 106 represents a third party.
generate and display, based on information associated with the physical object, a virtual rendering of the physical object (see: Fig. 2A (note: image representing women’s t-shirt and/or picnic basket));
electronically receive, via the network platform, a second user input comprising one or more interaction options, wherein the one or more interaction options comprises at least a request to exchange the physical object (see: Fig. 2E #226, 0011 (request to receive a replacement), 0061-0062, 0064, Fig. 5A #506-512); and
Note: The customer may select to return an item and receive a replacement (i.e., exchange the item).
transmit, to the third party system, instructions to complete a selected interaction with the user (see: 0132, 0136, 0185, 0226, Fig. 3A-3C, Fig. 7 #702-706).
Friesen discloses all of the above but does not disclose that the network environment is an extended reality environment, and fails to disclose determine that an object associated with the first transaction is a physical object. Notably, Friesen clearly depicts physical objects that are to be returned.
Further, Friesen does not disclose wherein the computing device comprises a stereoscopic head-mounted display and at least one head motion tracking sensor such that the computing device is configured to present the extended reality platform as a three-dimensional environment.
To this accord, Oser discloses a system for enabling a retail experience in CGR environments such as augmented reality, augmented virtuality, virtual reality, and/or mixed reality environments (see: 0004, 0015, 0028-0035) configured to determine that an object associated with the first transaction is a physical object (see: 0013 (product is a physical product; inputs directed to physical product), 0045 (obtain images of physical objects from the real environment), 0064 (AR environment that includes both physical objects and virtual objects), 0070).
Oser also discloses wherein the computing device comprises a stereoscopic head-mounted display and at least one head motion tracking sensor such that the computing device is configured to present the extended reality platform as a three-dimensional environment (see: 0025 (subset of physical motions are tracked, e.g. head turning), 0035 (heads-up displays), 0038 (head-mounted display), 0040, Fig. 1B (108, 110, 116)).
Lastly, Oser additionally discloses that the system is configured to generate and display, based on information associated with the physical object, a virtual rendering of the physical object (see: 0069, 0075, Fig. 3 #302).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the invention of Friesen to have utilized the known technique related to providing extended reality environments as taught by Oser in order to have enabled the system of Friesen to have facilitated more instantaneous feedback, answers to questions, suggestions, demonstrations of products, and the human connection of an in-person shopping experience (see: Oser: 0004), thereby enhancing the customer experience of Friesen.
Additionally, Friesen does not disclose that the system is configured to verify an identity of the user by validating an authentication credential received via the computing device, wherein the authentication credential comprises a predefined user motion. Friesen, however, does disclose the use of a user account (e.g., 0044, 0063) for managing orders.
To this accord, Gurnani discloses a user authentication system that is configured to verify an identity of the user by validating an authentication credential received via the computing device, wherein the authentication credential comprises a predefined user motion (see: 0054 (device motion gesture), 0090-0091, 0154).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the invention of Friesen to have utilized the known technique for authenticating a user of an account as taught by Gurnani in order to have offered multiple degrees of freedom to select authentication factors and to have improved security while offering a reasonable customer experience (see: Gurnani: 0013, 0054).
Lastly, Friesen in view of Oser discloses all of the above but does not disclose:
accessing, from a third party system, a plurality of images of the physical object;
applying one or more conversion algorithms to the plurality of images to extract a three-dimensional shape from the plurality of images,
wherein at least one input to the one or more conversion algorithms comprises one or more technical specifications associated with the physical object; and,
where the virtual rendering is based on the extracted three-dimensional shape.
To this accord, Wang discloses accessing, from a third party system, a plurality of images of the physical object (see: 0006 (online concierge system receives multiple images of an item from a first client device associated with a shopper), 0056 (based on multiple images));
applying one or more conversion algorithms to the plurality of images to extract a three-dimensional shape from the plurality of images, wherein at least one input to the one or more conversion algorithms comprises one or more technical specifications associated with the physical object (see: 0062 (using photogrammetry, laser scanning, infrared (IR) thermography, or any other suitable technique or combination of technique; determine one or more dimensions of the item), 0095); and,
where the virtual rendering is based on the extracted three-dimensional shape (see: 0056 (generate three-dimensional image of an item), 0062 (use photogrammetric techniques to generate a three-dimensional image of the item based on the image and the dimensions), 0095, Fig. 7B).
The Examiner notes the above interpretation of technical specification, and asserts that the images of the products in Wang serve as a technical specification input to the conversion algorithm.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the invention of Friesen in view of Oser to have utilized the known technique for generating three-dimensional images based on measured dimensions as taught by Wang in order to have improved the ability to communicate characteristics of items to customers and to provide customers more time to examine these characteristics (see: Wang: 0006).
2. The system of claim 1, wherein the at least one processing device is further configured to: electronically receive, via the extended reality platform, the second user input comprising the one or more interaction options, wherein the one or more interaction options comprises at least a request to return the physical object (see: Friesen: Fig. 2C #214, Fig. 2E #226, 0011 (request to receive a replacement), 0061-0062, 0064, Fig. 5A #506-512).
7. The system of claim 2, wherein when the selected interaction comprises the request to return the physical object, the at least one processing device is further configured to: establish a wireless communication channel with a second third party system (see: Friesen: 0002 (wired or wireless networks), 0034 (WiFi router or cellular communication tower; server system 116 affiliated with retailer), 0035 (chain of affiliated stores), Fig. 1 #106, 116, & 120); and
display, via the extended reality platform, information associated with the second third party system to the user (see: Friesen: Fig. 2D #218 and 220, 0052-0053).
Regarding claims 8-9 and 14 (computer program product) and claims 15-16 and 20 (method), these claims recite at least substantially similar concepts and elements as recited in claims 1-2 and 7 such that similar analysis of the claims would be readily apparent to one of ordinary skill in the art. As such, claims 8-9, 14, 15-16, and 20 are rejected under at least similar rationale.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Searl (US 2022/0164429) discloses a user authentication system that is configured to verify an identity of the user by validating an authentication credential received via the computing device, wherein the authentication credential comprises a predefined user motion (see: 0007, 0010, 0042, 0046-0047).
Note: Searl qualifies as prior art under 102(a)(1) despite having common ownership and falling within the one-year grace period. Searl has no common inventors, and nothing of record indicates that either the 102(b)(1)(A) or 102(b)(1)(B) exception applies [note: there is no common-ownership exception to 102(a)(1) art].
Zhao (US 2017/0251268) discloses verifying an identity of the user by validating an authentication credential received via the computing device, wherein the authentication credential comprises a predefined user motion (see: 0165).
Montenegro (US 11,769,126) discloses verifying an identity of the user by validating an authentication credential received via the computing device, wherein the authentication credential comprises a predefined user motion (see: col. 6 lines 17-26, col. 10 lines 24-26, col. 12 lines 50-53).
PTO form 892-U discusses using risk-based authentication to continuously monitor users by mapping their behavioral patterns after log-in, to better distinguish between an authorized user, and that of an unauthorized user or an automated BOT or malware (see: Para 2-3).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM J ALLEN whose telephone number is (571)272-1443. The examiner can normally be reached Monday-Friday, 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anita Coupe can be reached at 571-270-3614. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
WILLIAM J. ALLEN
Primary Examiner
Art Unit 3625
/WILLIAM J ALLEN/Primary Examiner, Art Unit 3619