Prosecution Insights
Last updated: April 17, 2026
Application No. 18/502,784

MIXED REALITY FOOD AND BEVERAGE APPARATUS

Final Rejection — §103, §112

Filed: Nov 06, 2023
Examiner: GE, JIN
Art Unit: 2619
Tech Center: 2600 — Communications
Assignee: unknown
OA Round: 2 (Final)

Grant Probability: 80% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 9m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 80% (416 granted / 520 resolved; +18.0% vs TC avg, above average)
Interview Lift: +18.0% allowance rate for resolved cases with an interview (a strong lift)
Typical Timeline: 2y 9m average prosecution (38 applications currently pending)
Career History: 558 total applications across all art units
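
The arithmetic behind these headline figures can be reproduced from the counts above. A minimal Python sketch; treating the interview lift as a simple additive bump onto the career allow rate is an assumption about how the dashboard combines the two numbers:

```python
# Reproduce the examiner's headline stats from the counts shown above.
granted = 416          # career grants among resolved cases
resolved = 520         # career resolved cases
interview_lift = 0.18  # allowance-rate delta for resolved cases with an interview

allow_rate = granted / resolved                          # 0.80
with_interview = min(allow_rate + interview_lift, 1.0)   # 0.98, capped at 100%

print(f"Career allow rate:        {allow_rate:.0%}")     # 80%
print(f"Grant prob. w/ interview: {with_interview:.0%}") # 98%
```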

Statute-Specific Performance

Statute   Rate     vs TC Avg
§101      9.0%     -31.0%
§103      60.2%    +20.2%
§102      12.0%    -28.0%
§112      11.0%    -29.0%

Deltas are relative to the Tech Center average estimate (40.0% for each statute, shown as the black line in the original chart) • Based on career data from 520 resolved cases
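
The per-statute deltas are internally consistent with a single Tech Center baseline, which can be reconstructed from the table itself. A short sketch (note the page does not define the underlying metric, e.g., allowance rate after this rejection type, so the code only reproduces the chart arithmetic):

```python
# Recover the implied Tech Center baseline from each row: baseline = rate - delta.
rows = {  # statute: (examiner rate %, delta vs TC avg %)
    "§101": (9.0, -31.0),
    "§103": (60.2, +20.2),
    "§102": (12.0, -28.0),
    "§112": (11.0, -29.0),
}
for statute, (rate, delta) in rows.items():
    tc_avg = rate - delta  # every row implies the same ~40.0% baseline
    print(f"{statute}: examiner {rate:4.1f}% vs TC avg {tc_avg:.1f}% ({delta:+.1f} pts)")
```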

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This is in response to applicant's amendment/response filed on 01/02/2026, which has been entered and made of record. Claims 1-2, 10, 12, 16, and 20 have been amended. Claims 13-15 have been cancelled. Claims 1-12 and 16-20 are pending in the application.

Response to Arguments

Applicant's arguments filed on 01/02/2026 have been fully considered but they are not persuasive. Applicant submitted newly amended claims. Accordingly, new grounds of rejection are set forth below; the new grounds of rejection have been necessitated by Applicant's amendments to the claims. The objection to claims 1, 16, and 20 has been withdrawn after amendment. The rejection of claim 10 under 35 USC 112(b) has been withdrawn after amendment.

Applicant states: "Applicant respectfully submits that the above amendments traverse all pending rejections. Ohta discloses only static information displayed via a head-mounted device. In contrast, the present invention, especially in light of the above amendments, is an interactive experience where there is not simply information displayed via an overlay, but a three-dimensional interactive AR experience that incorporates the real-world trigger into that experience. This is in direct contrast to Ohta, which seeks to use an AR experience to overlay and hide the trigger. For instance, in paragraph 38, Ohta discloses that the AR image replaces or hides the actual trigger. The same is seen in paragraphs 48 and 52 of Ohta, where the AR image replaces the original trigger; it does not augment it or interact with it. The present invention does not replace the trigger; it incorporates the trigger in an interactive experience. Further, Ohta would teach away from an interactive experience, because the point of Ohta is to be able to hide, or replace, the real trigger. If Ohta were interactive with the real trigger, then it would reveal what Ohta is hiding behind the overlay, defeating the purpose of Ohta. Similarly, Isaacson has the same failings. Isaacson, at best, discloses passive display of set information. Isaacson does not teach an interactive display that incorporates the trigger as part of an AR experience. Isaacson does not teach interacting with the AR display at all or having the user be able to change the AR display in any way. Isaacson would teach away from that idea because it is focused on purchasing and shopping. Isaacson would not want the user to be able to interact with or change the information, because that information is the price and availability of goods. Allowing the user to change that would defeat the purpose of Isaacson, since it would be offered by stores which would not want the user to be able to change the price. This is demonstrated in at least Isaacson paragraphs 14-15, 57, and 59."

The examiner disagrees. First, a prior art rejection addresses the claims only, not the invention as described. Second, Applicant did not raise any specific argument or evidence to support this conclusion. The Examiner directs Applicant to the claim rejections below for detailed analyses.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C.
112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 2 and 12 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claim 2 recites the limitation "wherein the AR image includes animated digital objects." Claim 12 recites the limitation "wherein the user action is changing the spatial orientation of the user device in respect to the identifier apparatus." This is a new matter rejection: support for these new limitations was not found in the original specification. MPEP 2163 II A (b) states: "To comply with the written description requirement of 35 U.S.C. 112, para. 1, or to be entitled to an earlier priority date or filing date under 35 U.S.C. 119, 120, or 365(c), each claim limitation must be expressly, implicitly, or inherently supported in the originally filed disclosure. When an explicit limitation in a claim 'is not present in the written description whose benefit is sought it must be shown that a person of ordinary skill would have understood, at the time the patent application was filed, that the description requires that limitation.'"

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 5-9, 11, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2019/0141021 to Isaacson et al. in view of U.S. PGPub 2025/0104421 to Ohta et al.

[Image: media_image1.png]

Regarding claim 1, Isaacson et al.
teach a system configured to enable display of an augmented reality (AR) image on a user device (Fig 2, par 0056, par 0058, par 0159), the system comprising:

an identifier apparatus removably attached to a consumable item, wherein the identifier apparatus comprises a unique identifier associated with the consumable item (Fig 2, par 0013-0014; par 0058, "The user may simply also pick products and place them in the bag and motion detection or near field communication based on a tag attached to each product can automatically identify which products the user is carrying around without the need to manually scan a visual pattern or to enter in the code"; par 0084, "As lunch is ordered, a disposable RFID tag, or similar tag 304, 308 can be printed or generated and stuck to a package. Rather than printing a receipt, the system could print an electronic tag that has a sticky side that is simply attached to a container of the food"; par 0262, "The camera on the device can be used to take an image associated with the product, wherein data includes the image and wherein the image is one of an image of the product and an image of a tag (having a bar code or other scannable image) associated with the product" — i.e., products with a tag as the identifier);

an AR unit comprising (Fig 2, par 0058, par 0159, a server system):

a transceiver configured to receive a command signal associated with the unique identifier from the user device when the user device performs a predefined action in proximity to the identifier apparatus (par 0064, "It is noted that the same automatic instructions can also be provided via a camera on the mobile device scanning a code. In other words, the code that is scanned can be modified to include the instructions to launch a browser, navigate to a URL, and established communication with a server. Thus, the user could scan a code as they enter the store as well to initiate the communication with the server which can launch the user interface for continuing to scan for products to purchase. Once the server responds with a user interface and capabilities to select products for purchase, later scans or camera images of codes associated with products can identify that the server is already accessed and the mobile device only needs the product information in the data received"; par 0070; par 0074 (similar disclosure); par 0077, "Claims can cover a mobile device programmed or configured to receive a wireless instruction communication via a near field communication module (blue tooth, NFC, etc.), automatically launch a browser, populate a URL in the browser based on the data, establish a communication with a server and receive user interface data from the server. The browser or app can access a camera or other component on the mobile device such that the server can provide instructions through the browser to control an interface to enable the user to walk around the store, and scan codes associated with the products that they desire to purchase. Codes can be typed in manually as well or provided via NFC, camera images, Bluetooth, RFID, photo identification of the product, or other mechanisms. As the user scans or inputs codes or identifies the products in other ways, the codes or product data (such as a picture) are transmitted back to the server to further process the purchase"; par 0081, "the user interface can be simple and enable the user to simply scan a code 304, 308 on respective products 302, 306. A camera or other communication component 208 can be used to receive data from the product 302, 306. In another aspect, machine learning could be used to enable the device 204 to simply take a picture of a product without a code and have that data communicated to the store server 218 to identify the product for purchase"; par 0217, "The browser or app can access a camera or other component on the mobile device such that the server can provide instructions through the browser to control an interface to enable the user to walk around the store, and scan codes associated with the products that they desire to purchase. Codes can be typed in manually as well or provided via NFC, camera images, Bluetooth, RFID, photo identification of the product, or other mechanisms. As the user scans or inputs codes or identifies the products in other ways, the codes or product data (such as a picture) are transmitted back to the server to further process the purchase of those products" — i.e., scan the tag and send the tag information to the server);

a memory configured to store information associated with the consumable item (par 0217 and par 0262, "The camera on the device can be used to take an image associated with the product, wherein data includes the image and wherein the image is one of an image of the product and an image of a tag (having a bar code or other scannable image) associated with the product. Any such data can be coordinated with a product database at the store server for product identification. For example, where in image of the product is used, the store server can maintain various images of each of their products, such that the corresponding product can be identified from the image sent");

and a processor communicatively coupled with the transceiver and the memory, wherein the processor is configured to (Fig 2, server): obtain the command signal and the information associated with the consumable item (par 0064, par 0070, par 0074, par 0077, par 0081, and par 0217, all quoted above);

generate the AR content based on the information associated with the consumable item responsive to obtaining the command signal, and transmit the AR content to the user device (par 0064; par 0074; par 0159-0160, "the site may transfer some data associated with a product back through the API to the browser. A virtual reality engine, or an augmented reality engine on the device can receive the data associated with the product, and utilize that data to create a virtual reality or augmented reality experience");

wherein the AR image includes the consumable item along with interactive imagery capable of being interacted with by the user (Fig 3, par 0081-0083, "The user brings the mobile device 204 into a store and either manually, or in an automated fashion, a browser on the device 204 is populated with the URL to connect to a store server 218. The store server 218 can provide a user interface 318 (such as via a browser, or an application that is downloaded to the device) with instructions 310 regarding making purchases in the store. Initially, the user interface can be simple and enable the user to simply scan a code 304, 308 on respective products 302, 306. A camera or other communication component 208 can be used to receive data from the product 302, 306. In another aspect, machine learning could be used to enable the device 204 to simply take a picture of a product without a code and have that data communicated to the store server 218 to identify the product for purchase. The system could confirm with the user that a threshold has been met to properly identify each product. … Several additional concepts can also be introduced by applying the enhanced capabilities of the user's mobile device for scanning for codes associated with products to be purchased, communicating with a store site 218 through a browser and making the payment through a browser-based API. For example, in restaurants, the user could walk in to a store such as Chipotle, order lunch, the store could attach a device on the lunch in some manner, such as on a container, and the user could just walk out of the store. The device 304, 308 could have encoded the cost of the meal and can transmit a signal which is received by the mobile device 204 to identify the cost, the meal, and initiate communication with the store server 218 to enable the user to make the purchase via a simplified browser-based payment process like Apple Pay or Google Pay" — i.e., generating an interactive interface about the consumable item, with extra information, on the user's mobile device, allowing the user to interact with it, such as by making a purchase).

But Isaacson et al. are silent on generating the AR image based on the information associated with the consumable item responsive to obtaining the command signal, and transmitting the AR image to the user device.

In a related endeavor, Ohta et al. teach wherein the processor is configured to (Fig 11): generate the AR image based on the information associated with the consumable item responsive to obtaining the command signal; and transmit the AR image to the user device (par 0007, "detection means for detecting a predetermined detection target in an image that is obtained by capturing at least part of a field of view of a user and shows a food item; overlay area setting means for setting, in the image, an overlay area for information related to the food item by using the detection target as a reference; and display control means for performing overlay display of the information on the overlay area"; Fig 3, par 0034-0037, "The information processing apparatus 2 sets, by using the detected three-dimensional code OB1 as a reference, an overlay area in the image captured by the capturing section 22, and performs overlay display of information related to the food item on the set overlay area. An image 32 illustrated in FIG. 3 is obtained by setting, in the image 31, an area showing pieces of cooked chicken breast (steamed chicken) as an overlay area, and performing overlay display of an image of pieces of deep-fried chicken on the overlay area" — i.e., generate an AR image and transmit it to the HMD display).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Isaacson et al. to generate the AR image based on the information associated with the consumable item responsive to obtaining the command signal, and to transmit the AR image to the user device, as taught by Ohta et al., in order to combine elements of virtual reality (VR), augmented reality (AR), and mixed reality (MR) to create immersive experiences that seamlessly blend digital and physical environments and significantly enhance the delivery of information, e.g., during a shopping experience, offering new ways for users to discover, evaluate, and interact with products.

Regarding claim 3, Isaacson et al. as modified by Ohta et al. teach all the limitations of claim 1, and Isaacson et al.
teach wherein the predefined action comprises scanning the unique identifier using a user device camera (par 0013-0014, "an interface would automatically be presented with instructions regarding how to choose products, such as to scan a code using the camera on the phone or to simply hover their phone near a product to receive a near field communication identification of each respective product"; par 0064, "The user can then select products according to the available process such as by scanning a code using the camera on their mobile device or merely using communication technologies to bring their mobile device near a product such as that the electrical component attached that product provides the necessary data for adding the product to their shopping cart").

Regarding claim 5, Isaacson et al. as modified by Ohta et al. teach all the limitations of claim 1, and teach wherein the consumable item is at least one of a food item and a beverage item (Isaacson et al.: Fig 2; Ohta et al.: Fig 3, par 0037-0038).

Regarding claim 6, Isaacson et al. as modified by Ohta et al. teach all the limitations of claim 1, and teach wherein the unique identifier is a Quick Response (QR) code (Isaacson et al.: Fig 3A, par 0013-0014, par 0081; Ohta et al.: Figs 3 and 6, par 0055-0056).

Regarding claim 7, Isaacson et al. as modified by Ohta et al. teach all the limitations of claim 1, and teach wherein the unique identifier is an image tag (Isaacson et al.: par 0014, "The user may simply also pick products and place them in the bag and motion detection or near field communication based on a tag attached to each product can automatically identify which products the user is carrying around without the need to manually scan a visual pattern or to enter in the code"; par 0084, "similar tag 304, 308 can be printed or generated and stuck to a package"; par 0181, "This interface can apply to an in-store purchasing experience where an identification of the product is received at the mobile device (RFID tag, code scanned via a camera, a photo of the product, manual entry, etc.)"; par 0217, quoted above; Ohta et al.: Figs 3 and 6, par 0055-0056).

Regarding claim 8, Isaacson et al. as modified by Ohta et al. teach all the limitations of claim 1, and Isaacson et al. teach wherein the unique identifier is at least one of a near field communication (NFC) transceiver, a radio frequency identification (RFID) transceiver, an ultra-wideband (UWB) transceiver and a Bluetooth low energy (BLE) transceiver (par 0217, "The browser or app can access a camera or other component on the mobile device such that the server can provide instructions through the browser to control an interface to enable the user to walk around the store, and scan codes associated with the products that they desire to purchase. Codes can be typed in manually as well or provided via NFC, camera images, Bluetooth, RFID, photo identification of the product, or other mechanisms").

Regarding claim 9, Isaacson et al. as modified by Ohta et al. teach all the limitations of claim 1, and Isaacson et al. teach wherein the AR unit is hosted on a server (Isaacson et al.: Fig 3A, par 0013-0014, par 0081, par 0112, par 0159).

Regarding claim 11, Isaacson et al. as modified by Ohta et al. teach all the limitations of claim 1, and Singh et al. teach wherein the information associated with the consumable item comprises at least one of an ingredient list, a recipe, nutrition information, an ingredient origin and an art work associated with the consumable item (Fig 4, par 0069, par 0115-0116).

Regarding claims 16-19, the method claims 16-19 are similar in scope to claims 1, 3, and 5-6 and are rejected under the same rationale.

Regarding claim 20, Isaacson et al. teach a non-transitory computer-readable storage medium in a distributed computing system, the non-transitory computer-readable storage medium having instructions stored thereupon which, when executed by a processor, cause the processor to (par 0243, par 0371). The remaining limitations of the claim are similar in scope to claim 1 and are rejected under the same rationale.

Claims 2 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2019/0141021 to Isaacson et al. in view of U.S. PGPub 2025/0104421 to Ohta et al., further in view of U.S. PGPub 2020/0201513 to Malmed et al.

Regarding claim 2, Isaacson et al. as modified by Ohta et al. teach all the limitations of claim 1, but are silent on wherein the AR image includes animated digital objects. In a related endeavor, Malmed et al. teach wherein the AR image includes animated digital objects (par 0018, "The presentation includes visual media components (e.g., images) and/or audio media components. To generate images such as static or animated text and/or graphics, the example presentation generator 102 of FIG. 1 includes an image generator 106"; par 0040-0041, "the image generator 106 can animate the graphic to indicate changes in relative location, such as pulsating the graphic as the presentation generator gets closer or moves further away, or changing the speed of that pulsating to indicate changes in relative location. As the presentation generator continually tracks the location of the RFID tag relative thereto, the presentation generator, in particular the map generator instructing the image generator, continually adjusts the size, location, color intensity, animations, etc. of the graphic to indicate changes in relative location"). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Isaacson et al. as modified by Ohta et al. to include wherein the AR image includes animated digital objects, as taught by Malmed et al., in order to generate an AR image that identifies the location of the RFID tag to the user, in particular in an augmented reality display.

Regarding claim 4, Isaacson et al. as modified by Ohta et al. teach all the limitations of claim 1, but are silent on wherein the predefined action comprises disposing the user device within a predefined distance from the identifier apparatus. In a related endeavor, Malmed et al.
teach wherein the predefined action comprises disposing the user device within a predefined distance from the identifier apparatus (par 0029, "The presentation generator 102 further includes an RFID reader 130 for identifying items of interest in an inventory environment, in particular, by identifying an RFID tag associated with each item of interest. The RFID reader 130 may include an RFID antenna, and the RFID reader 130 may be configured to emit, via the RFID antenna, a radiation pattern, where the radiation pattern is configured to extend over an effective reading range within an inventory environment to identify and read one or more RFID tags. In exemplary embodiments, the presentation generator 102 instructs the RFID reader 130 to identify only certain RFID tags, such as RFID tags corresponding to items identified by an external device or server 142"). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Isaacson et al. as modified by Ohta et al. to include wherein the predefined action comprises disposing the user device within a predefined distance from the identifier apparatus, as taught by Malmed et al., in order to generate an AR image that identifies the location of the RFID tag to the user, in particular in an augmented reality display.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2019/0141021 to Isaacson et al. in view of U.S. PGPub 2025/0104421 to Ohta et al., further in view of U.S. PGPub 2022/0291806 to Dunn et al.

Regarding claim 10, Isaacson et al. as modified by Ohta et al. teach all the limitations of claim 1, but are silent on wherein the AR image changes based on user action. In a related endeavor, Dunn et al. teach wherein the AR image changes based on user action (par 0306-0308, "In FIG. 8AB, the configurator user interface 808 is dismissed, the style of the currently selected CGR object is changed to the teacup style as indicated by the change in appearance of the representation of the cup CGR object 806C and the change in the style affordance 813DBB, and the pattern of the currently selected CGR object is changed to the stars pattern as indicated by the change in appearance of the cup CGR object 806C and the change in the pattern affordance 813DBC. … FIG. 8AC illustrates the CGR file composing user interface 801 in response to detecting the user input 899X directed to the representation of the cup CGR object 806C. In FIG. 8AC, the size of the cup CGR object is increased as indicated by the increased display size of the representation of the cup CGR object 806C (and, similarly, the change in the second CGR scene affordance 803BB)"; par 0310; par 0333, "while a CGR object is selected (and the first type of object selection indicator 807 is displayed), different types of user input directed to the representation of the CGR object results in different changes to spatial properties of the CGR object. For example, in FIGS. 8AB and 8AC, the user input 899X of a first type (e.g., a pinch) directed to the representation of the cup CGR object 806C changes a size of the cup CGR object. As another example, in FIGS. 8AC and 8AD, the user input 899Y of a second type (e.g., a rotate) directed to the representation of the cup CGR object 806C changes an orientation around a z-axis of the cup CGR object. As another example, in FIGS. 8AD and 8AF, the user input 899Z of a third type (e.g., a drag) directed to the representation of the cup CGR object 806C changes a location in an xy-plane of the cup CGR object. As another example, in FIGS. 8AH and 8AI, a user input 899AB of a fourth type (e.g., a tap) directed to the representation of the cup CGR object 806C changes the first type of object selection indicator 807 to a second type of object selection indicator 817, allowing various additional spatial manipulations as described below."). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Isaacson et al. as modified by Ohta et al. to include wherein the AR image changes based on user action, as taught by Dunn et al., in order to change a spatial property of the particular CGR object based on the user input and an intuitive spatial manipulation point, to generate high-quality CGR images.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2019/0141021 to Isaacson et al. in view of U.S. PGPub 2025/0104421 to Ohta et al., further in view of U.S. PGPub 2022/0291806 to Dunn et al., further in view of U.S. PGPub 2009/0143877 to Panje.

Regarding claim 12, Isaacson et al. as modified by Ohta et al. and Dunn et al. teach all the limitations of claim 10, but are silent on wherein the user action is changing the spatial orientation of the user device in respect to the identifier apparatus. In a related endeavor, Panje teaches wherein the user action is changing the spatial orientation of the user device in respect to the identifier apparatus (abstract, "a method of controlling a portable user device (100, 420), the method comprising the steps of (260) detecting a change of orientation of the portable user device, and (270) selecting, upon said detection of the orientation change, at least one command from a list of commands. The device may automatically personalize its user interface and provide, for instance, different functionalities to respective users"; Fig 3, par 0036, "In step 360, the initial orientation of the portable device is detected, and based on this, the first command to be selected from the list may be determined in step 370. The first command is the command associated with the detected initial orientation, as determined in step 370. After the first command has been found, the commands subsequent in the list are to be selected upon further changes of orientation of the device"). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Isaacson et al. as modified by Ohta et al. and Dunn et al. to include wherein the user action is changing the spatial orientation of the user device in respect to the identifier apparatus, as taught by Panje, so that the portable device can incorporate user input means such as the device's orientation, a keyboard, a touch screen, a pen-pointing device, voice recognition, or a remote control.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jin Ge, whose telephone number is (571) 272-5556. The examiner can normally be reached 8:00 to 5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jason Chan, can be reached at (571) 272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JIN GE/
Primary Examiner, Art Unit 2619
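
For orientation, the claim-1 architecture that the rejection maps onto Isaacson and Ohta amounts to a scan-to-overlay round trip: the user device performs a predefined action near a tagged consumable item, sends a command signal carrying the unique identifier to a server-side AR unit, and the AR unit generates AR content from stored item information and transmits it back. The sketch below is illustrative only; every name in it (ARUnit, handle_command, item_db, etc.) is hypothetical, and it is not the applicant's implementation or either reference's disclosure:

```python
# Illustrative sketch of the claim-1 flow as characterized in the rejection:
# user device scans an identifier -> AR unit returns AR content for the item.
from dataclasses import dataclass

@dataclass
class ARContent:
    item_id: str
    overlay: str       # e.g., a descriptor for an interactive 3D scene
    interactive: bool  # the "interactive imagery" limitation argued over Ohta

class ARUnit:
    """Server-side AR unit playing the transceiver + memory + processor roles."""

    def __init__(self, item_db: dict[str, dict]):
        self.item_db = item_db  # "memory": item info keyed by unique identifier

    def handle_command(self, unique_id: str) -> ARContent:
        # "transceiver": receives the command signal sent when the user device
        # performs the predefined action (scan / proximity) near the tag.
        info = self.item_db[unique_id]  # "processor": obtain stored item info
        # Generate AR content from the item information and transmit it back.
        overlay = f"3D scene for {info['name']}: {info['nutrition']}"
        return ARContent(item_id=unique_id, overlay=overlay, interactive=True)

# Usage: a QR/NFC scan on a drink cup yields its unique identifier.
unit = ARUnit({"QR-0042": {"name": "iced latte", "nutrition": "180 kcal"}})
print(unit.handle_command("QR-0042").overlay)
```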

Prosecution Timeline

Nov 06, 2023: Application Filed
Jun 29, 2025: Non-Final Rejection — §103, §112
Jan 02, 2026: Response Filed
Jan 26, 2026: Final Rejection — §103, §112 (current)
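
Under the final action's stated reply terms (a three-month shortened statutory period, extendable under 37 CFR 1.136(a) to an absolute six-month statutory maximum), the deadlines from the Jan 26, 2026 mailing date work out as in this simplified sketch, which ignores weekend/holiday rolling under 37 CFR 1.7:

```python
# Reply-window arithmetic for the final rejection above (simplified: no
# adjustment when a deadline lands on a weekend or federal holiday).
import datetime

def add_months(d: datetime.date, months: int) -> datetime.date:
    y, m = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + y, month=m + 1)

mailed = datetime.date(2026, 1, 26)                  # Final Rejection mailed
print("SSP expires:      ", add_months(mailed, 3))   # 2026-04-26
print("Statutory maximum:", add_months(mailed, 6))   # 2026-07-26
```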

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592024: QUANTIFICATION OF SENSOR COVERAGE USING SYNTHETIC MODELING AND USES OF THE QUANTIFICATION — Granted Mar 31, 2026 (2y 5m to grant)
Patent 12586296: METHODS AND PROCESSORS FOR RENDERING A 3D OBJECT USING MULTI-CAMERA IMAGE INPUTS — Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579704: VIDEO GENERATION METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM — Granted Mar 17, 2026 (2y 5m to grant)
Patent 12573164: DESIGN DEVICE, PRODUCTION METHOD, AND STORAGE MEDIUM STORING DESIGN PROGRAM — Granted Mar 10, 2026 (2y 5m to grant)
Patent 12573151: PERSONALIZED DEFORMABLE MESH BY FINETUNING ON PERSONALIZED TEXTURE — Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 80%
With Interview: 98% (+18.0%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate
Based on 520 resolved cases by this examiner. Grant probability derived from career allow rate.
