Prosecution Insights
Last updated: April 19, 2026
Application No. 18/499,493

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM

Final Rejection (§102, §103)
Filed: Nov 01, 2023
Examiner: BEUTEL, WILLIAM A
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Canon Kabushiki Kaisha
OA Round: 2 (Final)

Grant Probability: 70% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 7m
Grant Probability With Interview: 90%

Examiner Intelligence

Career Allow Rate: 70% (above average; 328 granted / 469 resolved; +7.9% vs TC avg)
Interview Lift: +20.4% (strong; resolved cases with interview vs. without)
Typical Timeline: 2y 7m avg prosecution
Career History: 497 total applications across all art units; 28 currently pending
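The headline allow-rate figures above are simple ratios of the underlying counts. A minimal sketch of the arithmetic (variable names are illustrative, and it assumes the "+7.9% vs TC avg" delta is a percentage-point difference):

```python
granted = 328    # applications allowed over the examiner's career
resolved = 469   # allowed plus otherwise disposed (e.g., abandoned)

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # 69.9%, shown rounded as 70%

# Assumed interpretation: the delta is allow_rate minus the Tech Center average,
# so the implied Tech Center baseline is:
tc_avg = allow_rate - 0.079
print(f"Implied TC average: {tc_avg:.1%}")
```

This puts the implied Tech Center baseline around 62%, consistent with the "above average" label.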

Statute-Specific Performance

§101: 9.9% (-30.1% vs TC avg)
§103: 49.8% (+9.8% vs TC avg)
§102: 10.7% (-29.3% vs TC avg)
§112: 22.0% (-18.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 469 resolved cases
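Assuming the per-statute deltas are percentage-point differences against the Tech Center estimate (the page does not say whether the rates are overcome rates or rejection frequencies), the baseline can be recovered from each row. A quick consistency check on the table above:

```python
# (examiner rate %, delta vs TC avg, both in percentage points)
by_statute = {
    "§101": (9.9, -30.1),
    "§103": (49.8, 9.8),
    "§102": (10.7, -29.3),
    "§112": (22.0, -18.0),
}

for statute, (rate, delta) in by_statute.items():
    implied_tc_avg = round(rate - delta, 1)
    print(f"{statute}: implied TC average = {implied_tc_avg}%")
```

Every row implies the same 40.0% baseline, which fits the chart note that a single Tech Center average estimate is used for comparison.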

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Response to Amendment

Applicant has amended the specification to correct for the previous objection, and as such the objection based on the title has been withdrawn. Claims 3-5 and 9 have been amended to correct for the rejection made pursuant to 35 U.S.C. 112(b), and as such the rejection has been withdrawn. Claims 11 and 12 have been amended such that the claims no longer invoke interpretation under 35 U.S.C. 112(f).

Response to Arguments

Applicant's arguments filed 12/24/2025 have been fully considered but they are not persuasive, in part. Applicant argues that Oser fails to teach or suggest "wherein the function information about the virtual object is function information about an actual product corresponding to the virtual object" (see page 11 of applicant's correspondence filed 12/24/2025). Examiner respectfully disagrees. In particular, Oser teaches presenting a virtual object of a software product for demonstration, which allows a user to interact with different operations of the software presented to the user (see Oser, ¶51: virtual object provides the user with the option to initiate a retail experience in CGR environment 204; ¶58: display of different products that are displayed in CGR environment 204 as virtual objects, e.g. display software 211 as a virtual object, including "Device 200 detects the user's input on keyboard 206-2 (e.g., by identifying keyboard keys the user is contacting), and uses the detected inputs to interact with first software 211. In this way, the user can interact with physical laptop 206 in CGR environment 204 to conduct a full demonstration of the capabilities of first software 211 operating on laptop 206."). Accordingly, Oser teaches executing a function of the virtual object based on user operation information indicating a user operation, and function information about the virtual object, wherein the function information about the virtual object is function information about an actual product corresponding to the virtual object. As such, applicant's argument is not persuasive.

Applicant's remaining arguments, filed 12/24/2025, with respect to the rejection(s) of claim(s) 1-6 and 8-15 under 35 U.S.C. 102/103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Moyal et al.

Allowable Subject Matter

Claim 5 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 14 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Oser (US 2021/0004137 A1).
Regarding claim 14, Oser discloses:

An information processing method for generating virtual space, the information processing method comprising the steps of: (Oser, Abstract: generating computer generated reality environment, and presenting virtual object representing the product in the computer-generated environment; Fig. 1A and ¶¶36-37: system including device for computer-generated reality technologies; ¶28: virtual reality)

Displaying a model of a user and a virtual object in the virtual space (Oser, ¶28: virtual reality environment with virtual objects within computer-generated environment; also ¶¶32-33: opaque display for presenting virtual objects in AR environment, and simulated environment as representation of physical environment; Fig. 2D and ¶58: first software 211 displayed as virtual object; Fig. 2C and ¶52: the communication session is implemented such that a representation of a contact associated with the retailer (e.g., a salesperson) is displayed in CGR environment 204, and the user is able to interact with the retailer by speaking to the salesperson and manipulating virtual objects in CGR environment 204, where avatar 215 is a virtual representation of a salesperson associated with the retailer contacted using device 200 – i.e. "salesperson" is a user even though remote; also note Fig. 2G and ¶62 disclose the displayed user's hand – i.e. an alternative "model of a user"),

Changing at least one of the model and the virtual object to be displayed (Oser, ¶54: In some embodiments, avatar 215 represents a computer-generated position of the salesperson in CGR environment 204, and the changes made to virtual objects by the salesperson are shown in CGR environment 204 as being performed by avatar 215. In some embodiments, the salesperson participates in the communication session using a device similar to device 200. – i.e. "model" change; ¶76: in some embodiments, the information received from the communication session is a modification request initiated by a second electronic device associated with the salesperson, where modifying the appearance of the computer-generated reality environment based on the modification request includes presenting a modification of the object (e.g., a change in the orientation/pose of a virtual object, adding a virtual object to the environment, removing a virtual object from the environment); ¶77: adjusting the presentation of the virtual object representing the product in the computer-generated reality environment using information received from the communication session includes presenting a different virtual object – i.e. "virtual object" change), and

Acquiring user operation information indicating a user operation (Oser, ¶54: The salesperson is capable of communicating with the user and manipulating virtual objects displayed in CGR environment 204. Thus, inputs provided by the salesperson can effect a change in CGR environment 204 that is experienced by both the user and the salesperson… where the salesperson participates in the communication session using a device similar to the user's; ¶¶79-80: while providing the communication session, the device detects input and, in response to detecting input, modifies the virtual object representing the product in the computer-generated reality environment based on the detected input (e.g., modifying an appearance of the object (e.g., virtual object) based on the detected input)),

Wherein the changing step changes at least one of an operation of the model in the virtual space or a display of the virtual object based on the user operation information and function information about the virtual object acquired from the memory (Oser, ¶54: The salesperson is capable of communicating with the user and manipulating virtual objects displayed in CGR environment 204. Thus, inputs provided by the salesperson can effect a change in CGR environment 204 that is experienced by both the user and the salesperson. For example, the salesperson can manipulate the position of a virtual object, and the user can see, via device 200, the manipulation of the virtual object in CGR environment 204. Also ¶54: In some embodiments, avatar 215 represents a computer-generated position of the salesperson in CGR environment 204, and the changes made to virtual objects by the salesperson are shown in CGR environment 204 as being performed by avatar 215. In some embodiments, the salesperson participates in the communication session using a device similar to device 200 – i.e. the salesperson's inputs change the operation of the avatar model – see Figs. 2C-2F showing the changing position of the salesperson; ¶58: In some embodiments, the display of first software 211 is initiated by the salesperson to, for example, provide a demonstration of software that is capable of operating using laptop 206; ¶¶79-80: while providing the communication session, the device detects input and, in response to detecting input, modifies the virtual object representing the product in the computer-generated reality environment based on the detected input (e.g., modifying an appearance of the object (e.g., virtual object) based on the detected input)),

Wherein the function information about the virtual object is function information about an actual product corresponding to the virtual object (Oser, ¶51: virtual object provides the user with the option to initiate a retail experience in CGR environment 204; ¶58: display of different products that are displayed in CGR environment 204 as virtual objects, e.g. display software 211 as a virtual object, including "Device 200 detects the user's input on keyboard 206-2 (e.g., by identifying keyboard keys the user is contacting), and uses the detected inputs to interact with first software 211. In this way, the user can interact with physical laptop 206 in CGR environment 204 to conduct a full demonstration of the capabilities of first software 211 operating on laptop 206."), and

Wherein the processor is configured to change at least one of the operation of the model or the display of the virtual object in a case where it is determined, using the user operation information and the function information, that the user operation does not correspond to an actual operation of the actual product (The broadest reasonable interpretation of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) precedent are not met – see MPEP 2111.04; accordingly, Oser teaches the limitations, as "the case where" is a contingency that is not required by the method claim).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-3, 13, and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Oser (US 2021/0004137 A1) in view of Moyal et al. (US 2023/0196450 A1).
Regarding claim 1, Oser discloses:

An information processing apparatus for generating virtual space and a virtual object (Oser, Abstract: generating computer generated reality environment, and presenting virtual object representing the product in the computer-generated environment; Fig. 1A and ¶¶36-37: system including device for computer-generated reality technologies; ¶28: virtual reality; also note ¶25: a computer-generated reality (CGR) environment refers to a wholly or partially simulated environment that people sense and/or interact with via an electronic system), the information processing apparatus comprising:

a memory storing instructions (Oser, Fig. 1A and ¶37: memor(ies) 106; ¶41: memory(ies) 106 are one or more non-transitory computer-readable storage mediums (e.g., flash memory, random access memory) that store computer-readable instructions configured to be executed by processor(s) 102); and

a processor configured to execute the instructions to (Oser, Fig. 1A and ¶37: processor(s) 102; ¶41: memory(ies) 106 are one or more non-transitory computer-readable storage mediums (e.g., flash memory, random access memory) that store computer-readable instructions configured to be executed by processor(s) 102):

display the virtual object in the virtual space (Oser, ¶28: virtual reality environment with virtual objects within computer-generated environment; also ¶¶32-33: opaque display for presenting virtual objects in AR environment, and simulated environment as representation of physical environment; Fig. 2D and ¶58: first software 211 displayed as virtual object; also Fig. 2G and ¶62: virtual objects displayed in CGR environment, where user can interact with products 220, including e.g. smartphone 220-1 as a virtual object shown in CGR environment), and

execute a function of the virtual object in the virtual space based on user operation information indicating a user operation, and function information about the virtual object acquired from the memory (Oser, Fig. 2D and ¶58: In some embodiments, the display of first software 211 is initiated by the salesperson to, for example, provide a demonstration of software that is capable of operating using laptop 206. For example, the salesperson can control aspects of the retail experience to initiate display of different products that are displayed in CGR environment 204 as virtual objects. Although laptop 206 is not operating first software 211 in the physical environment, device 200 displays first software 211 as a virtual object appearing on display screen 206-1 to provide, in CGR environment 204, the appearance of first software 211 operating on laptop 206. In some embodiments, first software 211 represents fully interactive software that is responsive to inputs provided using laptop 206. For example, the user can interact with laptop 206 (e.g., by typing on keyboard 206-2) in CGR environment 204. Device 200 detects the user's input on keyboard 206-2 (e.g., by identifying keyboard keys the user is contacting), and uses the detected inputs to interact with first software 211. In this way, the user can interact with physical laptop 206 in CGR environment 204 to conduct a full demonstration of the capabilities of first software 211 operating on laptop 206. Fig. 2G and ¶62: virtual smartphone is fully functioning, such that when device 200 detects an input on smartphone 220-1, device 200 displays the smartphone 220-1 responding in a same manner as the physical smartphone would respond to the same input in a physical environment – note that detection of a user input and performing some response to the detected input inherently requires "user operation information indicating a user operation", as this is how software handles user input on computers; also ¶70: the virtual object (e.g., smartphone 220-1) representing the product is a virtual representation of the physical product (e.g., a virtual smartphone or virtual tablet) configured to perform the set of operations in response to detecting the first set of inputs directed to the virtual representation of the physical product; ¶79: device detects input, and in response, modifies virtual object based on detected input; ¶83: instructions for performing the features of the technique are included in memories),

Wherein the function information about the virtual object is function information about an actual product corresponding to the virtual object (Oser, ¶51: virtual object provides the user with the option to initiate a retail experience in CGR environment 204; ¶58: display of different products that are displayed in CGR environment 204 as virtual objects, e.g. display software 211 as a virtual object, including "Device 200 detects the user's input on keyboard 206-2 (e.g., by identifying keyboard keys the user is contacting), and uses the detected inputs to interact with first software 211. In this way, the user can interact with physical laptop 206 in CGR environment 204 to conduct a full demonstration of the capabilities of first software 211 operating on laptop 206."; ¶62: virtual smartphone is fully functioning, such that when device 200 detects an input on smartphone 220-1, device 200 displays the smartphone 220-1 responding in a same manner as the physical smartphone would respond to the same input in a physical environment), and

Wherein the processor is configured to execute the function corresponding to the function information in a case where it is determined, using the user operation information and the function information, that a user operation is executable (Oser, ¶62: virtual smartphone is fully functioning, such that when device 200 detects an input on smartphone 220-1, device 200 displays the smartphone 220-1 responding in a same manner as the physical smartphone would respond to the same input in a physical environment; ¶70: In some embodiments, the virtual object (e.g., smartphone 220-1) representing the product is a virtual representation of the physical product (e.g., a virtual smartphone or virtual tablet) configured to perform the set of operations in response to detecting the first set of inputs directed to the virtual representation of the physical product (e.g., the virtual smartphone/tablet is configured to turn on and display, on a virtual screen (e.g., 224), a homepage with applications (e.g., 226) in response to an input on a virtual power button presented as appearing on the virtual smartphone/tablet); ¶79: device detects input, and in response, modifies virtual object based on detected input).

The only limitation not explicitly taught by Oser is the obtaining of function information according to a product specification of the actual product.

Moyal discloses: wherein the processor is configured to execute the function corresponding to the function information in a case where it is determined, using the user operation information and the function information, that a user operation is executable according to a product specification of the actual product (Moyal, ¶40: system 200 may include virtual product component 220 configured to generate virtual models of one or more electronic device products available for sale at a retail facility, e.g., virtual product component 220 may be configured to access a retail facility server computer to receive product data comprising product features (e.g., images, specifications, capabilities) and generate virtual models of the products for implementation within a virtual reality environment displayed within a user interface of user device 120; ¶41: generate a virtual reality environment comprising visual elements corresponding to one or more of the user environment and electronic product devices; ¶47: computer-implemented method 300 may include one or more processors configured for generating a virtual reality simulation of the user input within the augmented reality environment based on the user specified requirement and the product specification data, wherein the virtual reality simulation demonstrates how the one or more electronic devices will function according to the user specified requirements in the user environment, providing the example of an audio system as if installed in the wall of the room; ¶56: virtual product data based on the product specification data, wherein generating the virtual models is based on the virtual product data).

Both Oser and Moyal are directed to virtual reality shopping systems allowing a user to experience the functionality of a product prior to purchase. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and method for providing interactive products for viewing by a user as provided by Oser, by incorporating the technique of using product specification data for providing the functionality of a product for simulation as provided by Moyal, using known electronic interfacing and programming techniques. The modification merely substitutes one source of information regarding the functionality of a device for another, yielding predictable results of obtaining data related to a product from product specifications as opposed to other sources for determining and providing simulated functionality. Moreover, the modification results in an improved shopping experience by ensuring the product functionality corresponds to the actual product, using the product specification for more accurate demonstrations.

Regarding claim 13, the apparatus of claim 1 performs the method of claim 13, and as such claim 13 is rejected based on the same rationale as claim 1 set forth above.

Regarding claim 15, Oser discloses: A non-transitory computer-readable storage medium storing a program that causes a computer to execute the information processing method (Oser, Fig. 1A and ¶37: memor(ies) 106; ¶41: memory(ies) 106 are one or more non-transitory computer-readable storage mediums (e.g., flash memory, random access memory) that store computer-readable instructions configured to be executed by processor(s) 102). Further regarding claim 15, the method is according to claim 13, and therefore claim 15 is further rejected based on the same rationale as claim 13 set forth above.
Regarding claim 3, Oser further discloses: wherein the processor is configured to:

acquire the user operation information (Oser, ¶54: inputs provided by user to manipulate virtual objects);

determine whether the user operation is executable using the user operation information and the function information; and execute the function in a case where it is determined that the user operation is executable (Oser, ¶58: In some embodiments, first software 211 represents fully interactive software that is responsive to inputs provided using laptop 206. For example, the user can interact with laptop 206 (e.g., by typing on keyboard 206-2) in CGR environment 204. Device 200 detects the user's input on keyboard 206-2 (e.g., by identifying keyboard keys the user is contacting), and uses the detected inputs to interact with first software 211. In this way, the user can interact with physical laptop 206 in CGR environment 204 to conduct a full demonstration of the capabilities of first software 211 operating on laptop 206; also ¶70: the virtual object (e.g., smartphone 220-1) representing the product is a virtual representation of the physical product (e.g., a virtual smartphone or virtual tablet) configured to perform the set of operations in response to detecting the first set of inputs directed to the virtual representation of the physical product).

Claim(s) 2 and 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Oser (US 2021/0004137 A1) in view of Moyal et al. (US 2023/0196450 A1) and in further view of Anderson et al. (US 2014/0282278 A1).

Regarding claim 2, the limitations included from claim 1 are rejected based on the same rationale as claim 1 set forth above. Further regarding claim 2, Oser as modified by Moyal further discloses: determining functions according to the product specification of the actual product (Moyal, ¶40: system 200 may include virtual product component 220 configured to generate virtual models of one or more electronic device products available for sale at a retail facility, e.g., virtual product component 220 may be configured to access a retail facility server computer to receive product data comprising product features (e.g., images, specifications, capabilities) and generate virtual models of the products for implementation within a virtual reality environment displayed within a user interface of user device 120; ¶41: generate a virtual reality environment comprising visual elements corresponding to one or more of the user environment and electronic product devices; ¶47: computer-implemented method 300 may include one or more processors configured for generating a virtual reality simulation of the user input within the augmented reality environment based on the user specified requirement and the product specification data, wherein the virtual reality simulation demonstrates how the one or more electronic devices will function according to the user specified requirements in the user environment, providing the example of an audio system as if installed in the wall of the room; ¶56: virtual product data based on the product specification data, wherein generating the virtual models is based on the virtual product data).

Both Oser and Moyal are directed to virtual reality shopping systems allowing a user to experience the functionality of a product prior to purchase. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and method for providing interactive products for viewing by a user as provided by Oser, by incorporating the technique of using product specification data for providing the functionality of a product for simulation as provided by Moyal, using known electronic interfacing and programming techniques. The modification merely substitutes one source of information regarding the functionality of a device for another, yielding predictable results of obtaining data related to a product from product specifications as opposed to other sources for determining and providing simulated functionality. Moreover, the modification results in an improved shopping experience by ensuring the product functionality corresponds to the actual product, using the product specification for more accurate demonstrations.

The only limitation not explicitly taught is that, when a function is not executable, the system performs a predetermined function different from the function information.

Anderson discloses: wherein the processor is configured to execute a predetermined function that is different from the function corresponding to the function information in a case where it is determined that the user operation is not executable (Anderson, ¶32: In block 406, the computing device 100 determines whether an input gesture has been recognized. If no input gesture was recognized, the method 400 loops back to block 402 to continue receiving input sensor data. If a gesture was recognized, the method 400 advances to block 408 – note that the performing of step 402 when a user operation is not recognized – i.e. not executable – is a different function than the function corresponding to a recognized gesture, block 408).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and method for providing interactive products for viewing by a user as provided by Oser, incorporating the technique of using product specification data for providing the functionality of a product for simulation as provided by Moyal, and further incorporating the technique for determining whether user operations are applicable to a particular virtual object as provided by Anderson, using known electronic interfacing and programming techniques. The modification results in an improved interactive virtual object system allowing for better handling of gestural inputs to ensure only proper inputs are coordinated with an object, in order to more accurately represent an item virtually for a more realistic and accurate demonstration.

Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Oser (US 2021/0004137 A1) in view of Moyal et al. (US 2023/0196450 A1) and in further view of Todasco (US 2019/0272139 A1).

Regarding claim 4, the limitations included from claim 1 are rejected based on the same rationale as claim 1 set forth above. Further regarding claim 4, Oser does not explicitly disclose the type of permission for executing functions of a virtual object as claimed.

Todasco discloses: wherein the memory further stores user identification information (Todasco, ¶37: initiation of user device includes registering with server or system, including login information and device identifiers or the like; ¶38: In some examples the virtual objects that are received for rendering may depend on an account associated with the user device. For example, there may be a plurality of virtual objects in the portion of the virtual world that the user device receives.
At least some of those virtual objects may be designated to particular users and/or accounts; ¶84: databases store and maintain information in memory), and wherein the instructions, when executed, further cause the image processing apparatus to: determine whether a user corresponding to the user identification information is authorized to execute the function of the virtual object, and wherein execution of the function corresponding to the function information is permitted only in a case where it is determined that the user is authorized to execute the function of the virtual object (Todasco, Abstract: controlling sharing of virtual object in virtual or augmented reality world; ¶15: user may also associate the virtual object with one or more accounts, such that other users with devices registered to the associated accounts may interact with the virtual object; ¶16: As such, a virtual object may be shared between the creator and designated users, but other non-designated users may not be able to interact with the virtual object or even know it exists; ¶25: the settings may designate one or more users with permission to view, detect, and/or manipulate the virtual object; ¶29: permission for a virtual object may depend on whether a user has a certain combination of virtual objects in their account's inventory, whether the user has conducted certain actions, whether the user belongs to a certain group, and/or the like; ¶31: the server may store the existence of the virtual object in a database with associated settings and information for rendering by user devices, e.g. the virtual object may be associated with a 3-D model that may be rendered for viewing on a display; ¶38: the user device may receive virtual objects for rendering that are associated with the user account and not virtual objects that are not associated with the user account. In some embodiments, rendering a virtual object may depend on the permission settings of the virtual object in relation to the user account and/or device. In some examples, the user device may render virtual objects for display that are associated with the account and not objects not associated with the account. In some examples, the user settings may cause the virtual object to be seen and/or interacted with only by intended recipients.)

Both Oser and Todasco are directed to systems for allowing distribution of, and interaction with, virtual objects within augmented or virtual reality by a remote user. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and method for providing interactive products for viewing by a user as provided by Oser, incorporating the technique of using product specification data for providing the functionality of a product for simulation as provided by Moyal, and further incorporating the type of permission control over virtual objects as provided by Todasco, using known electronic interfacing and programming techniques. The modification results in an improved distributed interactive virtual object system by better controlling access to particular virtual objects only in situations where a user is an intended recipient, to better control the distribution of information based on improved management of data.

Claim(s) 6, 8 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Oser (US 2021/0004137 A1) in view of Mani et al. (US 2020/0117336 A1).

Regarding claim 6, Oser discloses:

An information processing apparatus for generating virtual space (Oser, Abstract: generating computer generated reality environment, and presenting virtual object representing the product in the computer-generated environment; Fig. 1A and ¶¶36-37: system including device for computer-generated reality technologies; ¶28: virtual reality), the information processing apparatus comprising:

a memory storing instructions (Oser, Fig. 1A and ¶37: memor(ies) 106; ¶41: memory(ies) 106 are one or more non-transitory computer-readable storage mediums (e.g., flash memory, random access memory) that store computer-readable instructions configured to be executed by processor(s) 102); and

a processor configured to execute the instructions to: (Oser, Fig. 1A and ¶37: processor(s) 102; ¶41: memory(ies) 106 are one or more non-transitory computer-readable storage mediums (e.g., flash memory, random access memory) that store computer-readable instructions configured to be executed by processor(s) 102)

display a model of a user and a virtual object in the virtual space (Oser, ¶28: virtual reality environment with virtual objects within computer-generated environment; also ¶¶32-33: opaque display for presenting virtual objects in AR environment, and simulated environment as representation of physical environment; Fig. 2D and ¶58: first software 211 displayed as virtual object; Fig. 2C and ¶52: the communication session is implemented such that a representation of a contact associated with the retailer (e.g., a salesperson) is displayed in CGR environment 204, and the user is able to interact with the retailer by speaking to the salesperson and manipulating virtual objects in CGR environment 204, where avatar 215 is a virtual representation of a salesperson associated with the retailer contacted using device 200 – i.e. "salesperson" is a user even though remote; also note Fig. 2G and ¶62 disclose the displayed user's hand – i.e. an alternative "model of a user"),

change at least one of the model and the virtual object to be displayed (Oser, ¶54: In some embodiments, avatar 215 represents a computer-generated position of the salesperson in CGR environment 204, and the changes made to virtual objects by the salesperson are shown in CGR environment 204 as being performed by avatar 215. In some embodiments, the salesperson participates in the communication session using a device similar to device 200. – i.e. "model" change; ¶76: in some embodiments, the information received from the communication session is a modification request initiated by a second electronic device associated with the salesperson, where modifying the appearance of the computer-generated reality environment based on the modification request includes presenting a modification of the object (e.g., a change in the orientation/pose of a virtual object, adding a virtual object to the environment, removing a virtual object from the environment); ¶77: adjusting the presentation of the virtual object representing the product in the computer-generated reality environment using information received from the communication session includes presenting a different virtual object – i.e. "virtual object" change), and

acquire user operation information (Oser, ¶54: The salesperson is capable of communicating with the user and manipulating virtual objects displayed in CGR environment 204.
Thus, inputs provided by the salesperson can effect a change in CGR environment 204 that is experienced by both the user and the salesperson… where salesperson participates in communication session using similar device to user; ¶¶79-80: while providing communication session, device detects input and in response to detecting input, modifies the virtual object representing the product in the computer-generated reality environment based on the detected input (e.g., modifying an appearance of the object (e.g., virtual object) based on the detected input)) Wherein the processor changes at least one of an operation of the model in the virtual space or a display of the virtual object based on the user operation information and function information about the virtual object acquired from the memory (Oser, ¶54: The salesperson is capable of communicating with the user and manipulating virtual objects displayed in CGR environment 204. Thus, inputs provided by the salesperson can effect a change in CGR environment 204 that is experienced by both the user and the salesperson. For example, the salesperson can manipulate the position of a virtual object, and the user can see, via device 200, the manipulation of the virtual object in CGR environment 204. Also ¶54: In some embodiments, avatar 215 represents a computer-generated position of the salesperson in CGR environment 204, and the changes made to virtual objects by the salesperson are shown in CGR environment 204 as being performed by avatar 215. In some embodiments, the salesperson participates in the communication session using a device similar to device 200 – i.e. the salesperson's inputs change the operation of the avatar model – see Figs.
2C-2F showing changing position of the salesperson; ¶58: In some embodiments, the display of first software 211 is initiated by the salesperson to, for example, provide a demonstration of software that is capable of operating using laptop 206; ¶¶79-80: while providing communication session, device detects input and in response to detecting input, modifies the virtual object representing the product in the computer-generated reality environment based on the detected input (e.g., modifying an appearance of the object (e.g., virtual object) based on the detected input).) Wherein the function information about the virtual object is function information about an actual product corresponding to the virtual object (Oser, ¶51: virtual object provides the user with the option to initiate a retail experience in CGR environment 204; ¶58: display of different products that are displayed in CGR environment 204 as virtual objects, e.g. display software 211 as a virtual object, including “Device 200 detects the user's input on keyboard 206-2 (e.g., by identifying keyboard keys the user is contacting), and uses the detected inputs to interact with first software 211. In this way, the user can interact with physical laptop 206 in CGR environment 204 to conduct a full demonstration of the capabilities of first software 211 operating on laptop 206.”) Although Oser discloses a demonstration of the product, a possible distinction from the current claim is that the user operation does not correspond to an actual operation of the actual product. This appears to be an attempt to get away from the teachings of Oser, which is directed to a software product demonstration and may possibly be interpreted as interaction with the product itself.
Mani discloses: Wherein the processor changes at least one of an operation of the model in the virtual space or a display of the virtual object based on the user operation information and function information about the virtual object and wherein the processor is configured to change at least one of the operation of the model or the display of the virtual object in a case where it is determined, using the user operation information and the function information, that the user operation does not correspond to an actual operation of the actual product (Mani, Fig. 4C and ¶83 discloses user hand gesture interacting with a preset virtual object, such as virtual fridge, and based on the hand gesture, moving a first preset virtual object of the fridge, namely the door, to swing open; This teaches a change of the display of the virtual object when the operation is performed to a virtual representation of the object, and not to actual operation of the actual product – i.e. the user is interacting with a virtual object representation, and not the physical refrigerator itself; Note ¶60 of Mani discusses augmented reality (AR) and virtual reality (VR) processing and rendering module 276 for generating AR and VR experiences for the user based on the products or virtual representations of the products that the user interacts with; ¶77: the virtual image processing and rendering system 100 allows the user to visualize one or more home appliances that are recommended to the user in their simulated home setup in the AR/VR environment and ¶80 discloses using virtual shopping on the electronic device for an appliance in the physical environment) Both Oser and Mani are directed to virtual reality shopping systems allowing a user to experience the functionality of a product prior to purchase.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and method for providing interactive products for viewing by a user as provided by Oser, by incorporating the technique of using virtual representations of actual physical products for demonstration as provided by Mani, using known electronic interfacing and programming techniques. The modification merely substitutes one product for shopping demonstrations for another, yielding predictable results of providing interactivity in a simulated physical environment for a physical product prior to shopping. The modification results in an improved shopping experience by allowing for different types of products commonly desired by shoppers for a more versatile and useful experience. Regarding claim 14, the apparatus of claim 6 performs the method of claim 14 and as such claim 14 is rejected based on the same rationale as claim 6 set forth above. Regarding claim 8, Oser further discloses: Wherein the processor is configured to: execute the function of the virtual object in the virtual space, and execute the function based on the function information about the virtual object (Oser, Fig. 2D and ¶58: In some embodiments, the display of first software 211 is initiated by the salesperson to, for example, provide a demonstration of software that is capable of operating using laptop 206. For example, the salesperson can control aspects of the retail experience to initiate display of different products that are displayed in CGR environment 204 as virtual objects. Although laptop 206 is not operating first software 211 in the physical environment, device 200 displays first software 211 as a virtual object appearing on display screen 206-1 to provide, in CGR environment 204, the appearance of first software 211 operating on laptop 206.
In some embodiments, first software 211 represents fully interactive software that is responsive to inputs provided using laptop 206. For example, the user can interact with laptop 206 (e.g., by typing on keyboard 206-2) in CGR environment 204. Device 200 detects the user's input on keyboard 206-2 (e.g., by identifying keyboard keys the user is contacting), and uses the detected inputs to interact with first software 211. In this way, the user can interact with physical laptop 206 in CGR environment 204 to conduct a full demonstration of the capabilities of first software 211 operating on laptop 206. ¶70: the virtual object (e.g., smartphone 220-1) representing the product is a virtual representation of the physical product (e.g., a virtual smartphone or virtual tablet) configured to perform the set of operations in response to detecting the first set of inputs directed to the virtual representation of the physical product; ¶¶79-80: while providing communication session, device detects input and in response to detecting input, modifies the virtual object representing the product in the computer-generated reality environment based on the detected input (e.g., modifying an appearance of the object (e.g., virtual object) based on the detected input).) Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over: Oser (US 2021/0004137 A1) in view of Mani et al. (US 2020/0117336 A1) and in further view of Todasco (US 2019/0272139 A1). Regarding claim 9, the limitations included from claim 6 are rejected based on the same rationale as claim 6 set forth above. Further regarding claim 9, Oser does not explicitly disclose the type of permission for executing functions of a virtual object as claimed.
Todasco discloses: Wherein the memory further stores user identification information, and wherein the instructions when executed, further cause the image processing apparatus to: determine whether a user corresponding to the user identification information can display the virtual object, and wherein display of the virtual object based on the function information is performed only in a case where it is determined that the user can display the virtual object (Todasco, Abstract: controlling sharing of virtual object in virtual or augmented reality world; ¶15: user may also associate the virtual object with one or more accounts, such that other users with devices registered to the associated accounts may interact with the virtual object; ¶16: As such, a virtual object may be shared between the creator and designated users, but other non-designated users may not be able to interact with the virtual object or even know it exists; ¶25: the settings may designate one or more users with permission to view, detect, and/or manipulate the virtual object; ¶29: permission for a virtual object may depend on whether a user has certain combination of virtual objects in their account's inventory, whether the user has conducted certain actions, whether the user belongs to a certain group, and/or the like; ¶31: the server may store the existence of the virtual object in a database with associated settings and information for rendering by user devices, e.g. the virtual object may be associated with a 3-D model that may be rendered for viewing on a display; ¶38: the user device may receive virtual objects for rendering that are associated with the user account and not virtual objects that are not associated with the user account. In some embodiments, rendering a virtual object may depend on the permission settings of the virtual object in relation to the user account and/or device. 
In some examples, the user device may render virtual objects for display that are associated with the account and not objects not associated with the account. In some examples, the user settings may cause the virtual object to be seen and/or interacted with only by intended recipients.) Both Oser and Todasco are directed to systems for allowing distribution and interaction with virtual objects within augmented or virtual reality by a remote user. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and method for providing interactive products for viewing by a user as provided by Oser, incorporating the technique of using virtual representations of actual physical products for demonstration as provided by Mani, by incorporating the type of permission control over virtual objects as provided by Todasco, using known electronic interfacing and programming techniques. The modification results in an improved distributed interactive virtual object system by better controlling access to particular virtual objects only in situations where a user is an intended recipient, to better control the distribution of information based on an improved management of data. Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over: Oser (US 2021/0004137 A1) in view of Mani et al. (US 2020/0117336 A1) in further view of Bromenshenkel et al. (US 2010/0198653 A1). Regarding claim 10, the limitations included from claim 6 are rejected based on the same rationale as claim 6 set forth above and incorporated herein. Further regarding claim 10, Oser does not explicitly disclose changing the display of the virtual object based on a priority registered previously by the user.
Bromenshenkel teaches: Wherein the processor is configured to change a display of the virtual object based on a priority registered previously by the user (Bromenshenkel, ¶6: a computer-implemented method for presenting virtual objects to a user represented in a virtual environment using an avatar, including receiving an indication to present the user with a plurality of virtual objects, each virtual object having a relative priority of presentation; ¶17 discloses presenting virtual objects to a user with visibility arranged by priority; ¶18: past user interactions within the immersive virtual environment may be analyzed to determine which portions of the user's viewport, when including virtual objects that are offered for sale, are most likely to result in a sale to the user) Both Oser and Bromenshenkel are directed to virtual viewing of products to a user in a virtual environment. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and method for providing interactive products for viewing by a user as provided by Oser, incorporating the technique of using virtual representations of actual physical products for demonstration as provided by Mani, by further providing the priority of displaying objects based on previous interactions as provided by Bromenshenkel, using known electronic interfacing and programming techniques. The modification results in an improved interactive virtual object system for presenting virtual objects to a user by arranging objects of more interest to a user with more visibility and accessibility, to better tailor the presentation of virtual objects to the user’s preferences, better catering to a user’s needs by making more relevant data easier to access. Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over: Oser (US 2021/0004137 A1) in view of Moyal et al.
(US 2023/0196450 A1) and in further view of Wang et al. (US 2020/0342682 A1). Regarding claim 11, Oser discloses: An information processing system (Oser, ¶23 and ¶39 and Fig. 1B) comprising: An information storing apparatus (Oser, ¶38: remote server as base station device in which HMD is in communication with base station device; Fig. 1B and ¶39: communication between two devices, 100b and 100c; ¶¶57-58: retrieving details of laptop from database or accessing information from information network, where “the salesperson can control aspects of the retail experience to initiate display of different products that are displayed in CGR environment 204 as virtual objects”); wherein the information storing apparatus includes a processor and a memory, including instructions stored thereon (Oser, Fig. 1A and ¶37: memor(ies) 106; ¶41: memory(ies) 106 are one or more non-transitory computer-readable storage mediums (e.g., flash memory, random access memory) that store computer-readable instructions configured to be executed by processor(s) 102) and The information processing apparatus according to claim 1, (Oser, ¶39 and Fig. 1B, second device 100c, and further disclosed for same rationale as claim 1 set forth above in view of Oser and Moyal) Both Oser and Moyal are directed to virtual reality shopping systems allowing a user to experience the functionality of a product prior to purchase. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and method for providing interactive products for viewing by a user as provided by Oser, by incorporating the technique of using product specification data for providing the functionality of a product for simulation as provided by Moyal, using known electronic interfacing and programming techniques.
The modification merely substitutes one source of information regarding functionality of a device for another, yielding predictable results of obtaining data related to a product from product specifications as opposed to other sources for determining and providing simulated functionality. Moreover, the modification results in an improved shopping experience by ensuring a product functionality corresponds to the actual product using the product specification for more accurate demonstrations. Although it would be inherently required to store the function information related to the products at some point in the lifecycle of the computer technology as provided by Oser, the act of storing the function information with the products is not clearly explained by Oser. Wang discloses: An information storing apparatus configured to store function information about a plurality of actual products, wherein the information storing apparatus includes a processor and a memory, including instructions stored thereon, which when executed cause the information storing apparatus to transmit, to the information processing apparatus, function information about one of the plurality of actual products, which corresponds to the virtual object (Wang, ¶23: The processing unit 13 is configured to identify the real object according to the scanning result of the scanning unit, determine at least one predetermined interactive characteristic according to an identification result of the processing unit, create a virtual object in a virtual environment corresponding to the real object in the real environment according to the scanning result of the scanning unit, and assign the at least one predetermined interactive characteristic to the virtual object in the virtual environment, so that the virtual object is allowed to be manipulated according to the at least one interactive input of the tracking result of the tracking unit when the at least one interactive input of the tracking result of the
tracking unit meets the at least one predetermined interactive characteristic. The at least one predetermined interactive characteristic can be stored in a storage element, such as RAM, ROM or the like, of the head mounted display system 1. ¶25: HMD display system includes remote computing apparatus, such as remote server, such that identification result and at least one predetermined interactive characteristic can be transmitted between remote computing apparatus and wearable body via communication module, where processor can be on remote computing apparatus; Fig. 7 and ¶39 discloses assigning function to virtual laptop, including allowing top cover portion of virtual laptop to be manipulated to be opened by moving opening side of top portion of virtual laptop with user's thumb) Both Oser and Wang are directed to computer graphic systems for providing an interactive experience with virtual representations of real objects to a user. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and method for providing interactive products for viewing by a user as provided by Oser, by incorporating the technique of using product specification data for providing the functionality of a product for simulation as provided by Moyal, and by storing functional information with virtual objects for presentation to a user as provided by Wang, using known electronic interfacing and programming techniques. The modification results in an improved interactive virtual object system by coordinating data in storage such that interactivity is associated with virtual objects for improved management of memory and easier accessibility for retrieving data for a user. Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over: Oser (US 2021/0004137 A1) in view of Moyal et al. (US 2023/0196450 A1) and Wang et al.
(US 2020/0342682 A1) and in further view of Smyers et al. (US 2009/0138376 A1). Regarding claim 12, the limitations included from claim 11 are rejected based on the same rationale as claim 11 set forth above. Furthermore, Oser modified by Moyal and Wang further discloses: the information storing apparatus transmits the function information about the actual product corresponding to the virtual object, to the information processing apparatus (Oser, ¶38: remote server as base station device in which HMD is in communication with base station device; Fig. 1B and ¶39: communication between two devices, 100b and 100c; ¶¶57-58: retrieving details of laptop from database or accessing information from information network, where “the salesperson can control aspects of the retail experience to initiate display of different products that are displayed in CGR environment 204 as virtual objects”) Also Wang discloses: the information storing apparatus transmits the function information about the actual product corresponding to the virtual object, to the information processing apparatus (Wang, ¶23: The processing unit 13 is configured to identify the real object according to the scanning result of the scanning unit, determine at least one predetermined interactive characteristic according to an identification result of the processing unit, create a virtual object in a virtual environment corresponding to the real object in the real environment according to the scanning result of the scanning unit, and assign the at least one predetermined interactive characteristic to the virtual object in the virtual environment, so that the virtual object is allowed to be manipulated according to the at least one interactive input of the tracking result of the tracking unit when the at least one interactive input of the tracking result of the tracking unit meets the at least one predetermined interactive characteristic. 
The at least one predetermined interactive characteristic can be stored in a storage element, such as RAM, ROM or the like, of the head mounted display system 1. ¶25: HMD display system includes remote computing apparatus, such as remote server, such that identification result and at least one predetermined interactive characteristic can be transmitted between remote computing apparatus and wearable body via communication module; Fig. 7 and ¶39 discloses assigning function to virtual laptop, including allowing top cover portion of virtual laptop to be manipulated to be opened by moving opening side of top portion of virtual laptop with user's thumb) Both Oser and Wang are directed to computer graphic systems for providing an interactive experience with virtual representations of real objects to a user. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and method for providing interactive products for viewing by a user as provided by Oser, by storing functional information with virtual objects for presentation to a user as provided by Wang, using known electronic interfacing and programming techniques. The modification results in an improved interactive virtual object system by coordinating data in storage such that interactivity is associated with virtual objects for improved management of memory and easier accessibility for retrieving data for a user. Oser modified by Wang does not explicitly teach the search unit as claimed.
Smyers discloses: Wherein the instructions when executed, further cause the information storing apparatus includes a search unit configured to search for the actual product corresponding to the virtual object, and wherein in a case where the information storing apparatus searches for the actual product corresponding to the virtual object, the information storing apparatus transmits the information about the actual product corresponding to the virtual object, to the information processing apparatus (Smyers, ¶18: Product database 106 can store product descriptions, photos, videos, virtual reality representations or components of products, or any other suitable description for goods and/or services, where controller 104 can map one or more products found in product database 106 with user inputs via query input engine 102, and a resulting display includes a view of those products that match criteria provided by the user; Fig. 2 and ¶19: FIG. 2 shows an example product database search flow 200 in accordance with one embodiment. In determining which products to display on display screen 110, controller 104 can forward search criteria 202 to product database 106. Such search criteria can involve previously stored user preferences, as well as the query entered, and/or any other suitable command indications. Product database 106 can be searched using a software-based search algorithm and/or a hardware-based solution, such as a content-addressable memory (CAM) implementation using search criteria 202, or a portion or derivation thereof, as a search or lookup key.) Both Oser and Smyers are directed to providing an interactive retail experience within a computer system, providing virtual product information and graphics to online customers.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and method for providing interactive products for viewing by a user as provided by Oser, by incorporating the technique of using product specification data for providing the functionality of a product for simulation as provided by Moyal, by storing functional information with virtual objects for presentation to a user as provided by Wang, and by further using the product database architecture for organizing and storing data on products for customer retrieval as provided by Smyers, using known electronic interfacing and programming techniques. The modification results in an improved distributed interactive virtual object system providing an improved digital organization of product information within an easier-to-access and easier-to-manage product database. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM A BEUTEL whose telephone number is (571)272-3132. The examiner can normally be reached Monday-Friday 9:00 AM - 5:00 PM (EST). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, DANIEL HAJNIK can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /WILLIAM A BEUTEL/Primary Examiner, Art Unit 2616

Prosecution Timeline

Nov 01, 2023
Application Filed
Oct 17, 2025
Non-Final Rejection — §102, §103
Dec 24, 2025
Response Filed
Feb 24, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12581262
AUGMENTED REALITY INTERACTION METHOD AND ELECTRONIC DEVICE
2y 5m to grant Granted Mar 17, 2026
Patent 12572258
APPARATUS AND METHOD WITH IMAGE PROCESSING USER INTERFACE
2y 5m to grant Granted Mar 10, 2026
Patent 12566531
CONFIGURING A 3D MODEL WITHIN A VIRTUAL CONFERENCING SYSTEM
2y 5m to grant Granted Mar 03, 2026
Patent 12561927
MEDIA RESOURCE DISPLAY METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Feb 24, 2026
Patent 12554384
SYSTEMS AND METHODS FOR IMPROVED CONTENT EDITING AT A COMPUTING DEVICE
2y 5m to grant Granted Feb 17, 2026
Based on the examiner's 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
70%
Grant Probability
90%
With Interview (+20.4%)
2y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 469 resolved cases by this examiner. Grant probability derived from career allow rate.
