Prosecution Insights
Last updated: April 19, 2026
Application No. 18/335,559

PROTOTYPING APPLICATIONS OF SPATIALLY AWARE SMART OBJECTS USING AUGMENTED REALITY

Status: Non-Final OA (§103)
Filed: Jun 15, 2023
Examiner: ALKHATEEB, NOOR
Art Unit: 2193
Tech Center: 2100 — Computer Architecture & Software
Assignee: City University Of Hong Kong
OA Round: 3 (Non-Final)
Grant Probability: 53% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 5m
With Interview: 99%

Examiner Intelligence

Grants 53% of resolved cases.
Career Allow Rate: 53% (63 granted / 119 resolved; -2.1% vs TC avg)
Interview Lift: +54.2% for resolved cases with interview (strong)
Avg Prosecution: 3y 5m typical timeline; 23 currently pending
Career History: 142 total applications across all art units

Statute-Specific Performance

§101: 22.5% (-17.5% vs TC avg)
§103: 57.0% (+17.0% vs TC avg)
§102: 6.2% (-33.8% vs TC avg)
§112: 13.5% (-26.5% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 119 resolved cases
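
One way to sanity-check these figures: reading each percentage as the examiner's allowance rate for cases that received that rejection type (an assumption; the dashboard does not label the metric explicitly), the "vs TC avg" deltas imply a Tech Center reference value near 40% for every statute. A minimal sketch using only the numbers shown above:

```python
# Sketch only: reproduces the statute-specific deltas from the figures shown above.
# The examiner rates and deltas are the dashboard's numbers; the Tech Center
# averages are inferred from them (rate minus delta), not pulled from USPTO data.
examiner_rate = {"101": 22.5, "103": 57.0, "102": 6.2, "112": 13.5}     # % of cases allowed
delta_vs_tc = {"101": -17.5, "103": +17.0, "102": -33.8, "112": -26.5}  # percentage points

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]  # implied Tech Center average estimate
    print(f"§{statute}: examiner {rate:.1f}% vs TC avg {tc_avg:.1f}% ({delta_vs_tc[statute]:+.1f} pp)")

# Every implied TC average works out to 40.0%, i.e. the black reference line sits
# at roughly a 40% allowance estimate for each statute.
```

The practical reading is unchanged: this examiner resolves §103-rejected cases well above the Tech Center estimate and §101-, §102-, and §112-rejected cases well below it.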

Office Action

§103
Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . DETAILED ACTION This action is in response to the application filed on 01/29/2026. Claims 1, 3-5, 10-14, 17-27 are pending. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1, 3-7, 10, 12-17, 18-27 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mullins et al. (US 2017/0255450 A1) hereinafter Mullins in view of Piya et al. (US 10,579,207 B2) hereinafter Piya and further in view of Ramani et al. (US 2023/0038709 A1) hereinafter Ramani. Regarding claim 1, A system, comprising: Mullins discloses at least one memory that stores computer-executable components (Mullins Fig. 14 element 1404 storing instructions element 1424); and at least one processor that executes the computer-executable components stored in the at least one memory (Mullins Fig. 14 element 1402 executing instructions element 1424), wherein the computer-executable components comprise: a spatial detection component that determines and tracks spatial positions and orientations of one or more physical objects in association with moving the one or more physical objects within a real-world environment (Mullins [0027] discloses an optical sensor of a device captures an image at a first physical object and a second physical object. An augmented reality (AR) application in the device identifies the first and second physical objects and a physical state (e.g., orientation, position, location, context information) of the first and second physical objects. Mullins [0044] discloses The spatial cooperative programming language module 216 identifies and determines spatial aspects of real-world objects (e.g., size, location, orientation, placement, shape, color) or actions (e.g., motion or rotation in a direction or along an axis) that can be used to further enhance and visualize computer programming (e.g., visual highlights to indicate functions or operations, connectors such as arrows, lines, arcs, spatial arrangement of augmented or virtual reality information such as text or graphical symbols). For example, the spatial cooperative programming language module 216 creates a visual program to compare two physical objects by having the user identify the physical objects using augmented or virtual reality (e.g., the user picks up two physical objects, one in each hand) instead of typing or selecting the objects from a list displayed on a screen monitor.) 
as viewed through a display of an augmented reality device (Mullins [0098]- [0099] discloses augmented reality display rendering visual indicators associated with physical objects); an interface component comprising a visual programming user interface rendered on the display that facilitates defining spatial events and corresponding effects associated with the moving of one or more physical objects based on the moving (Mullins [0044]-[0048] disclose different example embodiments such as visual programming a virtual function associated with physical manipulations of the physical object. Mullins Fig. 11 illustrates this process of identifying a physical object and its state and receiving a selection of programming logic function. Mullins [0046] discloses the rendering module 214 renders a display of a virtual object (e.g., a door with a color based on the temperature inside the room as detected by sensors 202 from the mobile device 112) based on a three-dimensional model of the virtual object (e.g., 3D model of a virtual door) associated with the physical object 120 (e.g., a physical door). In another example, the rendering module 214 generates a display of the virtual object overlaid on an image of the physical object 120 captured by a camera of the mobile device 112. In the example of a see-through display, no image of the physical object 120 is displayed. The virtual object may be further manipulated (e.g., by the user 102) by moving the physical object 120 relative to the mobile device 112. Similarly, the display of the virtual object may be manipulated (e.g., by the user 102) by moving the mobile device 112 relative to the physical object 120. Thus, spatial events of changing the color of the door based on temperature and displaying the color using a virtual door of the physical door is similar to prototyping spatial events and corresponding events. Another example may be seen in Mullins [0064]), Mullins lacks explicitly wherein the interface component renders a first virtual proxy via the visual programming interface representative of a spatial event in response to reception of first user input, via the visual programming user interface, setting a defined spatial position or orientation of the one or more physical objects resulting from the moving as the spatial event, and wherein the interface component renders a second virtual proxy via the visual programming interface representative of an effect of the spatial event in response to reception of second user input, via the visual programming user interface, selecting the second virtual proxy to represent the effect; and a spatial event and effect creation component that generates an event-effect model for the one or more physical objects based on the defined spatial position or orientation and the effect, wherein the spatial event effect and creation component generates a mapping between the spatial event and the effect within the event-effect model based on reception of third user input via the visual programming user interface connecting the first virtual proxy to the second virtual proxy. 
Piya teaches wherein the interface component renders a first virtual proxy via the [visual programming] interface representative of a spatial event corresponding to a defined spatial position or orientation of the one or more objects arrived to by the one or more objects in association with the moving, and wherein the interface component renders a second virtual proxy via the [visual programming] interface representative of an effect of the spatial event in response to reception of second user input, via the [visual programming] user interface, selecting the second virtual proxy to represent the effect (Piya [col. 17, line 67] and [col. 18, lines 1-9] teaches Such constraint solving capabilities enable users to construct precise assemblies by approximately specifying the relative spatial configuration of the virtual components. In some examples, whenever two virtual components are brought into close proximity (e.g., by moving one or more proxies) in a way that meets specific geometric and spatial constraint thresholds, those virtual components can snap into appropriate positions relative to one another. Further, Piya [col. 18, lines 21-35] teaches FIG. 17 shows an example of an automated snapping constraint. Here, two virtual components (bolt 1705 and hole 1710) are brought into close proximity, e.g., by moving bolt 1705 or hole 1710 using a proxy (not shown) as described above. The system identifies the possible assembly relationship between bolt 1705 and hole 1710 and infers the proximity as a user-intended modeling action. The bolt 1705 automatically snaps into the hole 1710 (view 1715). This permits assembling the bolt and the hole, where a user only brings the bolt 1705 close to the hole 1710. Upon doing so, the system recognizes the user's intent and precisely places the bolt 1705 inside the hole 1710. The virtual components used in the assembly can be, e.g., pre-existing (created from any shape modeling media) or created using the herein-described shape modeling approaches.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mullins to incorporate the teachings of Piya to “wherein the interface component renders a first virtual proxy via the [visual programming] interface representative of a spatial event corresponding to a defined spatial position or orientation of the one or more objects arrived to by the one or more objects in association with the moving, and wherein the interface component renders a second virtual proxy via the [visual programming] interface representative of an effect of the spatial event” in order to efficiently ease understanding of the end result through visual representation and increase user satisfaction. Ramani teaches an interface component comprising a visual programming user interface rendered on the display that facilitates defining spatial events and corresponding effects based on the moving occurring during operation of the visual programming user interface in a creation mode (Ramani teaches that, in the Creation Mode, the AR system 20 enables the user to create, modify, and view virtual assets, such as virtual objects, that can be interacted with in the freehand interactive AR application. In the Authoring Mode, the AR system 20 enables the user to define freehand gestures for interacting with the virtual objects and define animations and other actions for the virtual objects. 
In the Authoring Mode, the AR system 20 further enables the user to define interactions by combining the defined gestures and other predefined triggers with corresponding animations and other predefined actions. Where the creation mode and authoring mode together of Ramani are conceptually similar to the creation mode of the instant application) a spatial event and effect creation component that generates an event-effect model for the one or more physical objects based on the defined spatial position or orientation and the effect (Ramani [0074] teaches The trigger-action programming model of the Authoring Mode defines an interaction depending on two components, an input that is initiated by a subject of the interaction, and an output that is generated by an object of the interaction in response to the input.), wherein the spatial event effect and creation component generates a mapping between the spatial event and the effect within the event-effect model based on reception of third user input via the visual programming user interface connecting the first virtual proxy to the second virtual proxy (Ramani [0087] teaches The last step of the authoring process is to connect the triggers and actions, which can be done simply in a drag-and-drop manner within the AR graphical user interface. For example, to make the connection, the user first pinches the solid triangle icon of a trigger and picks up a connection line from it. Then the user approaches the hollow triangle icon of an action and releases the fingers to drop the connection line on the hollow triangle. In one embodiment, once the user finishes connecting the triggers and actions, the AR graphical user interface color-codes the connection for each interaction with a different color to better visualize the logic.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mullins to incorporate the teachings of Ramani to “an interface component comprising a visual programming user interface rendered on the display that facilitates defining spatial events and corresponding effects based on the moving occurring during operation of the visual programming user interface in a creation mode; a spatial event and effect creation component that generates an event-effect model for the one or more objects based on the defined spatial position or orientation and the effect, wherein the spatial event effect and creation component generates a mapping between the spatial event and the effect within the event-effect model based on reception of user input via the visual programming user interface connecting the first virtual proxy to the second virtual proxy” in order to efficiently permit live programming, ease debugging and increase developer satisfaction developing an AR application. Further, The AR application authoring system 10 improves the efficiency in thought delivery by incorporating embodied hand gestures with in-situ visual representations of dynamic 3D assets and animations in AR (Ramani [0122]). 
Regarding claim 3, The system of claim 1, wherein the spatial detection component determines and tracks the spatial positions and orientations based on sensor data captured by one or more sensors that are part of, or communicatively coupled to, the augmented reality device (Mullins [0036] discloses In one example embodiment, the computing resources of the server 110 may be used to determine and render virtual objects based on the tracking data (generated on the mobile device 112 using data from sensors 202 of the mobile device 112 or generated on the server 110 using data from stationary sensors 118).). Regarding claim 4, The system of claim 3, wherein the spatial detection component determines and tracks the spatial positions and orientations using one or more spatial positioning markers associated with the one or more physical objects and a spatial positioning marker-based tracking process (Mullins [0043] discloses the AR application 212 identifies a visual reference (e.g., a logo or QR code) on the physical object 120 and tracks the location of the visual reference within the display 204 of the mobile device 112. The visual reference may also be referred to as a marker and may consist of an identifiable image, symbol, letter, number, machine-readable code. For example, the visual reference may include a bar code, a quick response (QR) code, or an image that has been previously associated with the virtual object). Regarding claim 5, Mullins discloses The system of claim 1, wherein the real-world environment comprises an environment within a field-of-view of the display (Mullins [0056] discloses Each physical object 120 and 122 can have a unique shape (within the field of view of the sensors 202). Mullins [0070] discloses The camera 220 captures an image of physical objects 120, 122 within a field of view 224. Which is real-world environment Mullins [0031]), Mullins lacks and wherein the interface component renders the visual programming user interface via the display in association with the moving of the one or more physical objects. Ramani teaches and wherein the interface component renders the visual programming user interface via the display in association with the moving of the one or more physical objects (Ramani Fig. 4 and [0054] teaches In at least some embodiments, the AR graphical user interfaces include an interactive menu via which the user can navigate between the modes of the AR system 20 and utilized the available functionality of each mode. FIG. 4 shows, in illustration (a), an exemplary menu 400 for navigating between the Creation Mode, the Authoring Mode, and the Play Mode of the AR system 20. In some embodiments, the menu 400 is a floating menu that superimposed upon environment within the AR graphical user interfaces and may, for example, float next to a left hand or non-dominant hand of the user.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mullins to incorporate the teachings of Ramani to “wherein the interface component renders the visual programming user interface via the display in association with the moving of the one or more objects” in order to efficiently permit live programming and increase developer satisfaction. 
Regarding claim 10, Mullins in view of Ramani combination teaches The system of claim 1, the combination lacks explicitly wherein the second virtual proxy comprises a virtual asset representative of the effect and wherein the visual programming user interface enables control of a rendering position or a behavior of the virtual asset via the display in response to a detection of the spatial event. Piya teaches wherein the second virtual proxy comprises a virtual asset representative of the effect and wherein the [visual programming] user interface enables control of a rendering position or a behavior of the virtual asset via the display in response to a detection of the spatial event (Piya Fig. 17 illustrates the “snap” once detection of the spatial event between the virtual bolt and virtual hole is detected). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination to incorporate the teachings of Piya to “wherein the second virtual proxy comprises a virtual asset representative of the effect and wherein the visual programming user interface enables control of a rendering position or a behavior of the virtual asset via the display in response to a detection of the spatial event” in order to efficiently and continuously permit updates to assets based on user needs and increase user satisfaction. Regarding claim 12, Mullins teaches The system of claim 1, wherein the moving corresponds to a first moving of the one or more physical objects performed in association with the creation mode, and wherein the computer-executable components further comprise: Ramani further teaches a testing component that executes a testing mode of the visual programming user interface using the augmented reality device (Ramani [0103] teaches In response to the user selecting the “Test” option in the main menu column 404 of the menu 400, the AR system 20 enters the Play Mode and displays corresponding AR graphical user interfaces on the display 28. The AR graphical user interfaces of the Play Mode enable the user to test the authored the freehand interactive AR application, including the defined interactions. Particularly, in the Play Mode, the user can try out the programmed AR interactions on-the-fly.), wherein the testing mode facilitates testing of the spatial event and the effect in accordance with the event-effect model in association with a second moving of the one or more physical objects within the real-world environment as viewed through the display (Ramani [0103] teaches In response to the user selecting the “Test” option in the main menu column 404 of the menu 400, the AR system 20 enters the Play Mode and displays corresponding AR graphical user interfaces on the display 28. The AR graphical user interfaces of the Play Mode enable the user to test the authored the freehand interactive AR application, including the defined interactions. Particularly, in the Play Mode, the user can try out the programmed AR interactions on-the-fly. Ramani [0051] teaches Additionally, various AR graphical user interfaces are described for operating the AR system 20. In many cases, the AR graphical user interfaces include graphical elements that are superimposed onto the user's view of the outside world or, in the case of a non-transparent display screen 28, superimposed on real-time images/video captured by the camera 29. 
In order to provide these AR graphical user interfaces, the processor 25 executes instructions of the AR graphics engine 34 to render these graphical elements and operates the display 28 to superimpose the graphical elements onto the user's view of the outside world or onto the real-time images/video of the outside world.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination to incorporate the teachings of Ramani to “a testing component that executes a testing mode of the visual programming user interface using the augmented reality device, wherein the testing mode facilitates testing of the spatial event and the effect in accordance with the event-effect model in association with a second moving of the one or more objects within the field-of-view of the display” in order to efficiently determine errors and continuously improve the system by fixing errors as soon as they are found. Regarding claim 14, it’s directed to a method having similar limitations cited in claim 1. Thus claim 14 is also rejected under the same rationale as cited in the rejection of claim 1 above. Regarding claim 17, it’s directed to a method having similar limitations cited in claim 10. Thus claim 17 is also rejected under the same rationale as cited in the rejection of claim 10 above. Regarding claim 18, it’s directed to a method having similar limitations cited in claim 12. Thus claim 18 is also rejected under the same rationale as cited in the rejection of claim 12 above. Regarding claim 19, it’s directed to a non-transitory machine-readable medium having similar limitations cited in claims 1 and 5. Thus claim 19 is also rejected under the same rationale as cited in the rejection of claims 1 and 5 above. Regarding claim 20, it’s directed to a non-transitory machine-readable medium having similar limitations cited in claim 3. Thus claim 20 is also rejected under the same rationale as cited in the rejection of claim 3 above. Regarding claim 21, it’s directed to a non-transitory machine-readable medium having similar limitations cited in claim 12. Thus claim 21 is also rejected under the same rationale as cited in the rejection of claim 12 above. Regarding claim 13, Mullins teaches The system of claim 12, Mullins lacks explicitly wherein the spatial detection component determines and tracks updated spatial positions and orientations of the one or more physical objects in association with the second moving, and wherein the interface component renders, via the display and in accordance with the event-effect model, a virtual asset corresponding to the second virtual proxy in response to a detection of the spatial event by the spatial detection component based on an updated spatial position or orientation of the updated spatial positions and orientations corresponding to the defined spatial position or orientation. Ramani teaches wherein the spatial detection component determines and tracks updated spatial positions and orientations of the one or more physical objects in association with the second moving (Ramani [0053] teaches Finally, various forms of motion tracking are described in which spatial positions and motions of the user or of other objects in the environment are tracked. 
In order to provide this tracking of spatial positions and motions, the processor 25 executes instructions of the freehand interactive AR application authoring program 33 to receive and process sensor data from any suitable combination of the sensors 30 and the camera 29, and may optionally utilize visual and/or visual-inertial odometry methods such as simultaneous localization and mapping (SLAM) techniques. Ramani [0057] teaches Particularly, the processor 25 tracks a position of the hand of the user (e.g., a right hand or dominant hand of the user) or a position of a particular finger (e.g., index finger on the right hand or dominant hand of the user). As the user moves his or her hand or finger along or near a surface of the real-world object, the processor 25 scans the region of the real-world object 12 that is near the position of the hand or finger, and adds corresponding information (e.g., pieces of the polygon mesh) to the model of the virtual object.); and wherein the interface component renders, via the display and in accordance with the event-effect model, a virtual asset corresponding to the second virtual proxy in response to a detection of the spatial event by the spatial detection component based on an updated spatial position or orientation of the updated spatial positions and orientations corresponding to the defined spatial position or orientation (Ramani Fig. 15B illustrates the rendered virtual asset in response to new spatial position/orientation). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mullins to incorporate the teachings of Ramani to “determining and tracking new spatial positions and orientations of the one or more objects in association with the second moving; and rendering, via the display in accordance with the event-effect model, a virtual asset corresponding to the second virtual proxy in response to a detection of the spatial event based on a new spatial position or orientation corresponding to the defined spatial position or orientation.” in order to efficiently permit live programming, ease debugging and increase developer satisfaction developing an AR application. Further, The AR application authoring system 10 improves the efficiency in thought delivery by incorporating embodied hand gestures with in-situ visual representations of dynamic 3D assets and animations in AR (Ramani [0122]). Regarding claim 22, it’s directed to a non-transitory machine-readable medium having similar limitations cited in claim 13. Thus claim 22 is also rejected under the same rationale as cited in the rejection of claim 13 above. Regarding claim 23, it’s directed to a method having similar limitations cited in claim 13. Thus claim 23 is also rejected under the same rationale as cited in the rejection of claim 13 above. Regarding claim 24, the combination teaches The system of claim 1, Ramani further teaches wherein the spatial event represented by the first virtual proxy corresponds to a selected spatial event type from a predefined set of spatial event types, each spatial event type defining a spatial state or spatial relationship of a single physical object (Ramani [0073] teaches In addition to recorded animations, the Authoring Mode also provides several predefined actions that can be selected by the user. 
Particularly, a following action 728 receives a value from a trigger indicating the transformation of the user's hands or of another virtual object and causes a particular virtual object to maintain a constant relative transformation and/or pose with respect to the user's hands or the other virtual object, so as to “follow” the user's hands or the other virtual object. Where Ramani [0121] gives an example with respect to a physical object) or two or more physical objects (No rejection required due to “or” language), the spatial event types comprising at least one of a position-based event (Ramani [0073] teaches An appear/disappear action 732 receives a signal from a trigger indicating that some triggering event has occurred and causes a particular virtual object to appear in the environment or disappear from the environment. A mesh explosion action 736 receives a signal from a trigger indicating that some triggering event has occurred and causes a particular virtual object to explode or disintegrate using a predefined explosion or disintegration animation. A mesh deformation action 740 receives a signal from a trigger indicating that some triggering event has occurred and causes a particular virtual object to deform in a predefined or user defined manner), an orientation-based event, a distance-based event, or a relative spatial relationship event (No rejection required due to “or” language). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mullins to incorporate the teachings of Ramani to “wherein the spatial event represented by the first virtual proxy corresponds to a selected spatial event type from a predefined set of spatial event types, each spatial event type defining a spatial state or spatial relationship of a single physical object, the spatial event types comprising at least one of a position-based event” in order to efficiently permit live programming, ease debugging and increase developer satisfaction developing an AR application. Further, The AR application authoring system 10 improves the efficiency in thought delivery by incorporating embodied hand gestures with in-situ visual representations of dynamic 3D assets and animations in AR (Ramani [0122]). Regarding claim 25, Mullins discloses The system of claim 1, wherein the defined spatial position or orientation specifies a spatial relationship between at least two physical objects, the spatial relationship comprising at least one of a relative position (Mullins [0045] discloses The rendering module 214 dynamically renders the augmented or virtual reality information based on the relative position of the physical objects. For example, the rendering module 214 dynamically adjusts a display of the connector (e.g., virtual arrow or visual link) between two physical objects even when the location or position of the physical objects has moved.), a relative orientation, or a distance between the at least two physical objects (No rejection required due to “or” language). Regarding claim 26, the combination teaches The system of claim 1, Piya further teaches wherein the spatial event corresponds to a discrete spatial event, and wherein the event-effect model specifies the effect is triggered upon satisfaction of the placement of at least one of the one or more [physical] objects at the defined spatial position or orientation (Piya [col. 18, lines 21-35] teach FIG. 17 shows an example of an automated snapping constraint. 
Here, two virtual components (bolt 1705 and hole 1710) are brought into close proximity, e.g., by moving bolt 1705 or hole 1710 using a proxy (not shown) as described above. The system identifies the possible assembly relationship between bolt 1705 and hole 1710 and infers the proximity as a user-intended modeling action. The bolt 1705 automatically snaps into the hole 1710 (view 1715). This permits assembling the bolt and the hole, where a user only brings the bolt 1705 close to the hole 1710. Upon doing so, the system recognizes the user's intent and precisely places the bolt 1705 inside the hole 1710. The virtual components used in the assembly can be, e.g., pre-existing (created from any shape modeling media) or created using the herein-Described shape modeling approaches). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mullins to incorporate the teachings of Piya to “wherein the spatial event corresponds to a discrete spatial event, and wherein the event-effect model specifies the effect is triggered upon satisfaction of the placement of at least one of the one or more [physical] objects at the defined spatial position or orientation” in order to efficiently validate intent of move and prevent system failure. Regarding claim 27, the combination teaches The system of claim 1, Ramani further teaches wherein the spatial event corresponds to a continuous spatial event, and wherein the event-effect model specifies the effect varies in response to changes in the spatial position or orientation (Ramani [0094] teaches The workflow 300 may include associating a static input with a continuous output to define a manipulating interaction (block 336). Manipulating interactions are those in which virtual assets continuously react to a static input. The most common scenario, a virtual object maintains a constant relative transformation and/or pose with respect to the user's hands or the other virtual object, so as to “follow” the user's hands or the other virtual object. As mentioned above, a manipulating interaction can be defined by combining a Static input with a Continuous output. Thus, a manipulating interaction can be formed using the visual programming interface of the AR graphical user interface to connect (1) a static gesture trigger, which provides a value indicating the transformation of the user's hands or of another virtual object and (2) a following action, which receives the value indicating the transformation of the user's hands or of another virtual object.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mullins to incorporate the teachings of Ramani to “wherein the spatial event corresponds to a continuous spatial event, and wherein the event-effect model specifies the effect varies in response to changes in the spatial position or orientation” in order to improve the efficiency in thought delivery by incorporating embodied hand gestures with in-situ visual representations of dynamic 3D assets and animations in AR (Ramani [0122]). Claims 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mullins et al. (US 2017/0255450 A1) hereinafter Mullins in view of Ramani et al. (US 2023/0038709 A1) hereinafter Ramani and further in view of Newberg et al. (US 12,020,348 B2) hereinafter Newberg. 
Regarding claim 11, the combination teaches The system of claim 10, the combination lacks explicitly wherein the visual programming user interface facilitates at least one of selecting the virtual asset from a group of predefined virtual assets or creating the virtual asset using one or more virtual asset creation tools. Newberg further teaches wherein the visual programming user interface facilitates at least one of selecting the virtual asset from a group of predefined virtual assets or creating the virtual asset using one or more virtual asset creation tools (Newberg [col. 4, lines 34-53] teaches “In FIG. 1, digital asset 122 is a flower that can be customized in accordance with the functionality programmed into the flower by a digital asset creator (e.g., the developer of AR effect 114). For example, a user may be able to change the size, color, transparency, and physical features of the flower in screen 120 (i.e., digital asset 122). In this case, for instance, such changes can include adding more flowers, removing petals, enlarging the petals, reducing the size of the core, adding leaves, etc. The customization capabilities are unlimited as the digital asset creator may even program digital asset 122 to change in accordance with user usage of application 102 (e.g., how often the user accesses features of application 102 dictates the size/color of the flower). In screen 130, for example, user 104 has adjusted digital asset 122 by adding additional flowers. This modification may occur by tapping on the flower to add new flowers or by following prompts on the AR application that indicate how to add flowers. Consequently, in this case, digital asset 132 is the modified version of digital asset 122.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination to incorporate the teachings of Newberg to “wherein the virtual programming user interface facilitates at least one of selecting the virtual asset from a group of predefined virtual assets or creating the virtual asset using one or more virtual asset creation tools” in order to efficiently and continuously permit updates to assets based on user needs and increase user satisfaction. Response to Arguments Applicant’s arguments with respect to claim(s) 1, 3-5, 10-14, 17-23 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument and/or has been cured by a new reference. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to Noor Alkhateeb whose telephone number is (313)446-4909. The examiner can normally be reached Monday-Friday from 9:00AM ET to 5:00PM ET. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chat do, can be reached at telephone number (571) 272-3721. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center or Private PAIR to authorized users only. 
Should you have questions about access to Patent Center or the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/NOOR ALKHATEEB/
Primary Examiner, Art Unit 2193
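
For readers skimming the rejection, the claims and the cited Ramani reference both describe an event-effect ("trigger-action") pattern: one proxy represents a spatial event (a defined position or orientation of a tracked physical object), another represents an effect, and connecting the two creates a mapping that is checked at runtime. The sketch below is a minimal illustration of that pattern only; the class names, tolerance values, and the "mug" example are assumptions for illustration and are not taken from the application or from any cited reference.

```python
# Illustrative sketch only: a minimal event-effect ("trigger-action") model of the
# kind discussed in claim 1 and in Ramani's Authoring Mode. All names and the
# pose-matching tolerances are assumptions, not any party's actual implementation.
from dataclasses import dataclass, field
from typing import Callable
import math


@dataclass
class Pose:
    """Tracked position (x, y, z) and yaw orientation of a physical object."""
    x: float
    y: float
    z: float
    yaw_deg: float


@dataclass
class SpatialEvent:
    """First virtual proxy: a defined spatial position/orientation for an object."""
    object_id: str
    target: Pose
    position_tol: float = 0.05      # metres (assumed tolerance)
    orientation_tol: float = 10.0   # degrees (assumed tolerance)

    def is_satisfied(self, observed: Pose) -> bool:
        dist = math.dist((observed.x, observed.y, observed.z),
                         (self.target.x, self.target.y, self.target.z))
        ang = abs(observed.yaw_deg - self.target.yaw_deg) % 360.0
        ang = min(ang, 360.0 - ang)
        return dist <= self.position_tol and ang <= self.orientation_tol


@dataclass
class Effect:
    """Second virtual proxy: an effect, e.g. rendering a virtual asset."""
    name: str
    action: Callable[[], None]


@dataclass
class EventEffectModel:
    """Mapping created when the user connects the two proxies in the UI."""
    mappings: list[tuple[SpatialEvent, Effect]] = field(default_factory=list)

    def connect(self, event: SpatialEvent, effect: Effect) -> None:
        self.mappings.append((event, effect))

    def update(self, tracked: dict[str, Pose]) -> None:
        """Called each frame with tracked poses; fires effects whose events hold."""
        for event, effect in self.mappings:
            pose = tracked.get(event.object_id)
            if pose is not None and event.is_satisfied(pose):
                effect.action()


# Usage: once the physical "mug" is moved near the defined pose, the effect fires.
model = EventEffectModel()
model.connect(
    SpatialEvent("mug", Pose(0.0, 0.0, 0.8, 90.0)),
    Effect("show_steam", lambda: print("render steam animation over the mug")),
)
model.update({"mug": Pose(0.01, -0.02, 0.81, 95.0)})  # prints the render message
```

In claim-1 terms, connect() corresponds loosely to the third user input that joins the first and second virtual proxies, and update() to the runtime check of tracked poses against the defined spatial event.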

Prosecution Timeline

Jun 15, 2023
Application Filed
Apr 05, 2025
Non-Final Rejection — §103
Jun 02, 2025
Applicant Interview (Telephonic)
Jun 02, 2025
Examiner Interview Summary
Jul 21, 2025
Response Filed
Nov 23, 2025
Final Rejection — §103
Jan 29, 2026
Request for Continued Examination
Feb 08, 2026
Response after Non-Final Action
Mar 07, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602205
EXTERNALLY-INITIATED RUNTIME TYPE EXTENSION
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12596532
Workflow Creation Method And Apparatus
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12535998
DYNAMIC IMPORTATION OF EXTERNAL DEPENDENCY INFORMATION TO SUPPORT AUTOCOMPLETION IN AN INTERACTIVE DEVELOPMENT ENVIRONMENT
Granted Jan 27, 2026 (2y 5m to grant)
Patent 12517716
VEHICLE MASTER DEVICE, VEHICLE ELECTRONIC CONTROL SYSTEM, CONFIGURATION SETTING INFORMATION REWRITE INSTRUCTION METHOD, AND CONFIGURATION SETTING INFORMATION REWRITE INSTRUCTION PROGRAM PRODUCT
Granted Jan 06, 2026 (2y 5m to grant)
Patent 12511114
SERVER, STORAGE MEDIUM, AND SOFTWARE UPDATE METHOD
Granted Dec 30, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 53%
With Interview: 99% (+54.2%)
Median Time to Grant: 3y 5m
PTA Risk: High
Based on 119 resolved cases by this examiner. Grant probability derived from career allow rate.
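
As a rough arithmetic check, these projections are consistent with the career counts shown earlier. In the sketch below, only the 63/119 counts, the 99% with-interview figure, and the +54.2% lift come from this page; the without-interview rate and the interviewed-case share are back-solved, so treat them as inferences rather than reported data (and note the lift is read as percentage points).

```python
# Sketch only: relates the projection figures to the career counts shown above.
allowed, resolved = 63, 119
base = allowed / resolved                  # career allow rate ~= 0.529, the "53%" figure

with_interview = 0.99                      # displayed "With Interview" probability
lift = 0.542                               # displayed +54.2% lift, read as percentage points
without_interview = with_interview - lift  # implied rate without an interview ~= 0.448

# Interviewed share of resolved cases that would make the overall rate a weighted
# mix of the two groups (an inference, not data from the dashboard):
share = (base - without_interview) / (with_interview - without_interview)

print(f"career allow rate:         {base:.1%}")               # 52.9%
print(f"implied no-interview rate: {without_interview:.1%}")  # 44.8%
print(f"implied interviewed share: {share:.1%} (~{share * resolved:.0f} of {resolved} resolved)")
```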
