Prosecution Insights
Last updated: April 19, 2026
Application No. 18/444,301

Devices, Methods, and Graphical User Interfaces for Interacting with Three-Dimensional Environments

Status: Non-Final OA (§103)
Filed: Feb 16, 2024
Examiner: AUGUSTINE, NICHOLAS
Art Unit: 2178
Tech Center: 2100 — Computer Architecture & Software
Assignee: Apple Inc.
OA Round: 3 (Non-Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73%, above average (596 granted / 814 resolved; +18.2% vs TC avg)
Interview Lift: +27.8% among resolved cases with an interview
Typical Timeline: 3y 9m average prosecution; 44 applications currently pending
Career History: 858 total applications across all art units
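
The allow-rate and interview-lift figures above are simple ratios over the examiner's resolved cases. Here is a minimal sketch of that arithmetic, assuming a per-case record with hypothetical granted/had_interview fields; only the 596-granted / 814-resolved totals come from this page, and the interview split below is invented purely so the example runs:

```python
# Hypothetical sketch of how examiner-level stats could be derived from
# per-case records. Field names are assumptions; only the 596/814 totals
# come from this page.
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool
    had_interview: bool

def allow_rate(cases: list[ResolvedCase]) -> float:
    """Share of resolved cases that ended in a grant."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases: list[ResolvedCase]) -> float:
    """Allow-rate gap between cases with and without an interview."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# 814 resolved cases, 596 granted overall (73.2% career allow rate).
# The 300-with / 514-without interview split is invented for illustration.
cases = (
    [ResolvedCase(granted=True, had_interview=True)] * 270
    + [ResolvedCase(granted=False, had_interview=True)] * 30
    + [ResolvedCase(granted=True, had_interview=False)] * 326
    + [ResolvedCase(granted=False, had_interview=False)] * 188
)
print(f"career allow rate: {allow_rate(cases):.1%}")   # 73.2%
print(f"interview lift:   {interview_lift(cases):+.1%}")  # +26.6% on this invented split
```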

Statute-Specific Performance

§101: 9.6% (-30.4% vs TC avg)
§102: 50.1% (+10.1% vs TC avg)
§103: 36.2% (-3.8% vs TC avg)
§112: 2.3% (-37.7% vs TC avg)
Comparison baseline is the Tech Center average (estimate). Based on career data from 814 resolved cases.
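
Each delta above is just the examiner's rate minus the Tech Center average for that statute; notably, all four stated deltas back-solve to the same 40.0% baseline (e.g., 9.6% - (-30.4%) = 40.0%). A small sketch of that arithmetic, with the examiner rates taken from this page and the 40% baseline inferred from the deltas rather than stated directly:

```python
# Statute-vs-Tech-Center comparison. Examiner rates are from this page;
# the TC averages are back-solved from the stated deltas and are therefore
# estimates, not published figures.
examiner_rates = {"§101": 0.096, "§102": 0.501, "§103": 0.362, "§112": 0.023}
tc_avg_rates   = {"§101": 0.400, "§102": 0.400, "§103": 0.400, "§112": 0.400}

for statute, rate in examiner_rates.items():
    delta = rate - tc_avg_rates[statute]
    print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```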

Office Action

§103
DETAILED ACTION

A. This action is in response to the following communications: Request for Continued Examination filed 01/13/2026.
B. Claims 1-29 remain pending.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/13/2026 has been entered.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-29 are rejected under 35 U.S.C. 103 as being unpatentable over Agarawala, Anand et al. (US Pub. 2019/0313059 A1), herein "Agarawala", in view of Mueller, George G. et al. (US Pub. 2005/0275626 A1), herein "Mueller", in further view of Beall, Andrew C. et al. (US Pat. 10,403,050 B1), herein "Beall".

As for claims 1, 25, and 26, Agarawala teaches a method, the corresponding first computer system of claim 25, and the corresponding computer-readable storage medium of claim 26, comprising: a first display generation component; one or more input devices; one or more processors; and memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for, at a first computer system that is in communication with a first display generation component and one or more input devices (par. 5: illustrates example usages, operations, structures, methods, systems, combinations, and sub-combinations related to providing augmented reality (AR) and/or virtual reality (VR) computing environments): displaying, via the first display generation component, a view of a communication session between a first user of the first display generation component and a second user of a second display generation component that is different from the first display generation component (par. 8: the AR/VR system may be used to generate a shared experience or interface between multiple users, spanning one or more geographic areas, allowing each user access to the same data regardless of on which device the data may be stored, operating, or from which device the data is retrieved), wherein the view of the communication session includes a view of a three-dimensional environment that includes at least some virtual content that is shared between the first user and the second user, wherein displaying the view of the three-dimensional environment of the communication session includes displaying a respective representation of the second user in the view of the three-dimensional environment, and wherein the respective representation of the second user is determined based on a virtual spatial relationship between the first user and the second user in the three-dimensional environment (figs. 10B-11: a user may have logged into the system and may want to join a meeting in progress; the various workspaces may be those meetings or workspaces to which the user has been invited, is authorized to join, or that is associated with a user account; the user may then join any of the AR and/or VR sessions in progress and see and interact with the other users and data of the sessions already in progress); while displaying the view of the communication session, displaying, via the first display generation component, a user interface for controlling the communication session, wherein the user interface for controlling the communication session includes a first control object that, when activated by the first user, causes the first computer system to perform a respective operation that modifies an appearance of a three-dimensional region of the three-dimensional environment; while displaying the view of the three-dimensional environment, detecting a first user input that activates the first control object (par. 59: users may be represented by avatars in either AR or VR environments; as shown in the example of FIG. 11, both users, represented by their avatars, may view an enlarged display element which may be placed against a wall in the room); and in response to detecting the first user input that activates the first control object: modifying the appearance of the three-dimensional region of the three-dimensional environment for the first user of the first display generation component (pars. 133-135: user D interacts as an avatar, as seen by users A, B, and C, to manipulate element D within the shared spatial AR/VR environment); and initiating a process for the appearance of the three-dimensional region of the three-dimensional environment displayed at the second display generation component to be modified for the second user of the second display generation component (par. 63: using the smartphone's touch screen, the smartphone user may interact with AR display elements that are visible on the smartphone; FIG. 12B illustrates how a user on an AR/VR-enabled device, such as a laptop, may have access to the same environment; in either scenario, the mobile-device or laptop user may also drag and drop files from their local machines into the AR workspace so that other users in the AR workspace have access to or can see the files).
Agarawala does not specifically teach that lights are organized into groups and controlled as a group, only that virtual lights exist and can be placed in a virtual environment. However, in the same field of endeavor, Mueller provides a system that allows fine-grained control of virtual lighting that can affect real-world lighting. Mueller teaches: in accordance with a determination that the first control object corresponds to a first set of virtual augmentations including first virtual lighting, displaying the three-dimensional region of the three-dimensional environment with the first virtual lighting; and in accordance with a determination that the first control object corresponds to a second set of virtual augmentations including second virtual lighting, different from the first virtual lighting, displaying the three-dimensional region of the three-dimensional environment with the second virtual lighting (par. 213: a virtual-reality user interface is provided for the user to interact with virtual lighting and visualize what it would look like in a virtual room based upon a real-world room; par. 274 discusses first through nth groupings of lights for the user to assign and control via the user interface). It would have been obvious to one of ordinary skill in the art at the time of filing the claimed invention to combine Mueller into Agarawala because Mueller suggests in paragraph 24 that creating coordinated lighting effects presents many challenges, particularly in how to create complex effects that involve multiple lighting units in unusual geometries; a need exists for improved systems for creating and deploying lighting shows, and for allowing users to create and modify lighting effects in real time, such as during audio/visual performances that have a lighting component.

Agarawala in view of Mueller does not suggest changing from a first to a second, different scenery having different lighting. However, in the same field of endeavor, Beall teaches: in accordance with a determination that the first control object corresponds to a first set of virtual augmentations including a first immersive experience having first virtual lighting and first virtual scenery, displaying, in the view of the communication session, the three-dimensional region of the three-dimensional environment with the first virtual lighting and the first virtual scenery; and in accordance with a determination that the first control object corresponds to a second set of virtual augmentations including a second immersive experience having second virtual lighting and second virtual scenery, different from the first immersive experience having the first virtual lighting and the first virtual scenery, displaying, in the view of the communication session, the three-dimensional region of the three-dimensional environment with the second immersive experience having the second virtual lighting and the second virtual scenery; and initiating a process for the appearance of the three-dimensional region of the three-dimensional environment displayed at the second display generation component to be modified, including modifying an immersive experience having virtual lighting and virtual scenery, for the second user of the second display generation component (col. 14, lines 1-16; col. 4, lines 30-38; col. 14, lines 33-41; fig. 15 and col. 17, line 54 - col. 19, line 32, which teach a presentation-designer home screen that allows a presenter to display a navigation control for changing worlds, where these worlds can be designed and shared collaboratively with a second through nth user in a shared communication session; fig. 6 shows the navigation controller for changing worlds; fig. 9 is one world and fig. 14 is a different world having different virtual scenery and virtual lighting, all of which can be designed in the presentation-designer home screen as depicted in fig. 36; presentation of other avatars in a shared session is depicted in fig. 15). It would have been obvious to one of ordinary skill in the art before the effective filing date to combine Beall into Agarawala as modified by Mueller because Beall suggests in column one: "Augmented Reality (AR) generally refers to a computer simulated environment combined with the real world. Conventionally, the elements of the real world are augmented with computer generated graphics. Often, translucent stereoscopic headsets are worn by the user in AR simulations to enable a wearer to view the real world through the headset while also being able to view computer generated graphics. Movement of participants and/or objects in interactive VR and AR simulations may be tracked using various methods and devices. Multi-user environments greatly expands the potential of virtual environments by allowing two or more users to inhabit the same virtual world. As such, people can socialize, play games, or otherwise interact in an entirely digital/virtual context. Moreover, a user can be in a completely different part of the planet and still experience the 'physical' presence of another participant."

As for claim 2, Agarawala teaches the method of claim 1, including: while displaying the view of the communication session, displaying respective user interfaces of one or more applications that are shared in the communication session (par. 90: desktop applications, files, and images are displayed in a 3D space in a layout that can be interacted with and changed).

As for claim 3, Agarawala teaches the method of claim 1, including: while displaying the view of the communication session, displaying a second control object that, when activated, causes the first computer system to perform a respective operation that ceases to share a first application that is currently shared in the communication session (par. 48: hide/delete removes anything from the AR/VR environment, e.g., an application).

As for claim 4, Agarawala teaches the method of claim 1, including: while displaying the view of the communication session, displaying a respective identifier corresponding to a currently selected virtual augmentation that is applied to the three-dimensional environment (par. 28; fig. 3: superimposing a grid mesh screen onto the 3D AR/VR environment).

As for claim 5, Agarawala teaches the method of claim 1, including: while displaying the view of the communication session, displaying a third control object that, when activated, causes the first computer system to remove a first set of currently selected virtual augmentations that is applied to the three-dimensional environment (par. 90: desktop applications, files, and images are displayed in a 3D space in a layout that can be interacted with and changed).

As for claim 6, Agarawala teaches the method of claim 1, including:
while displaying the view of the communication session, displaying a fourth control object that, when activated, causes the first computer system to perform a respective operation that shares a respective set of virtual augmentations that is available to be applied to the three-dimensional environment with at least one other participant of the communication session (par. 23, fig. 2B: AR/VR, wherein AR shows a passthrough and VR shows a non-passthrough).

As for claim 7, Agarawala teaches the method of claim 1, including: while displaying the view of the communication session, displaying an indicator of a current level of immersion at which the view of the communication session is displayed, wherein the current level of immersion is selected from a first level of immersion and a second level of immersion, and wherein the second level of immersion includes reduced passthrough of a physical environment in the view of the communication session (fig. 3, par. 28: in the AR environment the user is able to see the physical environment through a passthrough; fig. 4, par. 33: the VR environment shows no passthrough of the physical environment and instead is completely rendered in 3D space).

As for claim 8, Agarawala teaches the method of claim 1, including: while displaying the view of the communication session, displaying respective representations of one or more participants of the communication session in a control region, wherein the respective representations of the one or more participants shown in the control region are distinct from respective representations of the one or more participants shown in the view of the three-dimensional environment (figs. 10B and 11 show representations of one or more participants collaborating by interacting with virtual objects in a shared spatial environment).

As for claim 9, Agarawala teaches the method of claim 8, including: while displaying the view of the communication session, displaying a first selectable option in association with a visual indication of a respective participant in the control region, wherein the first selectable option, when activated, removes the respective participant from the communication session (par. 65: the AR system may visualize everything that is said and allow the user(s) to decide what to keep or discard; par. 48: hide/delete removes an element from the AR display area; par. 58: authorization to join a space/meeting in 3D space).

As for claim 10, Agarawala teaches the method of claim 1, including: while displaying the view of the communication session, displaying a second selectable option in association with a visual indication of a respective participant in the control region, wherein the second selectable option, when activated, mutes audio input from the respective participant in the communication session (pars. 298, 300: mute-button functionality in a session with multiple users).

As for claim 11, Agarawala teaches the method of claim 1, including: while displaying the view of the communication session, displaying a third selectable option in association with a visual indication of a respective participant in the control region, wherein the third selectable option, when activated, causes the respective representation of the respective participant to be removed from the view of the three-dimensional environment (par. 65: the AR system may visualize everything that is said and allow the user(s) to decide what to keep or discard; par. 48: hide/delete removes an element from the AR display area; par. 58: authorization to join a space/meeting in 3D space).

As for claim 12, Agarawala teaches the method of claim 1, including: while displaying the view of the communication session, displaying a fifth control object that, when activated, causes the first computer system to display an input area in the view of the communication session for composing a text message that is to be displayed in a first three-dimensional region of the three-dimensional environment (par. 83: display of messages from messaging applications).

As for claim 13, Agarawala teaches the method of claim 1, including: while displaying the view of the communication session, displaying a sixth control object that, when activated, causes the first computer system to display an input area in a second three-dimensional region of the three-dimensional environment, wherein the input area is configured to receive drawing inputs from multiple participants of the communication session and present the drawing inputs in the second three-dimensional region of the three-dimensional environment (par. 66: drawings; par. 320: whiteboard interactions).

As for claim 14, Agarawala teaches the method of claim 13, including, while displaying the view of the communication session: in accordance with a determination that the view of the communication session includes respective representations of participants of the communication session at respective first positions in the view of the three-dimensional environment in accordance with virtual spatial relationships of the participants in the three-dimensional environment, displaying a sixth control object that, when activated, causes the first computer system to display an input area in a second three-dimensional region of the three-dimensional environment, wherein the input area is configured to receive drawing inputs from multiple participants of the communication session and present the drawing inputs in the second three-dimensional region of the three-dimensional environment; and in accordance with a determination that the view of the communication session includes representations of the participants of the communication session at respective second positions in the view of the three-dimensional environment, forgoing displaying the sixth control object (pars. 133-135: conference collaboration between users; par. 320: inclusion of a whiteboard in a meeting space for interaction).

As for claim 15, Agarawala teaches the method of claim 1, including: while displaying the view of the communication session, displaying respective status indicators for one or more participants of the communication session, wherein the respective status indicators of the one or more participants are indicative of respective manners by which the one or more participants are engaged in the communication session (pars. 59-60: personal indicators for indication of interaction).

As for claim 16, Agarawala teaches the method of claim 1, including: while displaying respective representations of one or more participants of the communication session in the control region of the communication session, detecting a first input directed to the respective representation of a first participant of the one or more participants of the communication session; and in response to detecting the first input, providing a first spatial cue corresponding to a first spatial location of the first participant in the three-dimensional environment (FIG. 21A illustrates that different users that enter or are part of the AR environment in a particular room or workspace may each be assigned or configured with their own mesh; the individual mesh may enable a user to bring display elements closer to the user for viewing at a default distance, which may be configured by the user).

As for claim 17, Agarawala teaches the method of claim 16, wherein providing the first spatial cue corresponding to the first spatial location of the first participant in the three-dimensional environment includes outputting a first spatial audio output with a first virtual location corresponding to the first spatial location of the first participant in the three-dimensional environment (par. 62: the AR system may include perspective sound for the various participants in a workspace).

As for claim 18, Agarawala teaches the method of claim 16, wherein providing the first spatial cue corresponding to the first spatial location of the first participant in the three-dimensional environment includes visually highlighting a representation of the first participant in the view of the three-dimensional environment (fig. 57: a form of highlighting of the user is visually depicted; par. 190).

As for claim 19, Agarawala teaches the method of claim 1, including: while displaying the view of the communication session, displaying respective representations of one or more applications that are shared in the communication session; while displaying the respective representations of one or more applications that are shared in the communication session, detecting a second input directed to the respective representation of a first application of the one or more applications that are shared in the communication session; and in response to detecting the second input, providing a second spatial cue corresponding to a second spatial location of the first application in the three-dimensional environment (in FIG. 59A, an AR home screen featuring multiple different applications may be shown; in FIG. 59B, a selection of the Twitter® application may cause the unselected applications to vertically drop, stack the shown images, close, or otherwise become less visually present in the user's perspective or view of the AR home screen).

As for claim 20, Agarawala teaches the method of claim 19, wherein providing the second spatial cue corresponding to the second spatial location of the first application in the three-dimensional environment includes outputting a second spatial audio output with a second virtual location corresponding to the second spatial location of the first application in the three-dimensional environment (par. 328: sound files can be played back; par. 62: participants wearing AR headsets will hear the sound as if it is coming from the direction of the avatar or graphic, as if they were physically present in the same room as the other participants).

As for claim 21, Agarawala teaches the method of claim 19, wherein providing the second spatial cue corresponding to the second spatial location of the first application in the three-dimensional environment includes visually highlighting a user interface of the first application in the view of the three-dimensional environment (fig. 48: the selected application is visually distinct from the other applications displayed in the user's AR/VR dock).

As for claim 22, Agarawala teaches the method of claim 1, including:
while the user interface for controlling the communication session is displayed, detecting movement of a gaze input directed to the user interface away from the user interface; and in response to detecting the movement of the gaze input away from the user interface, ceasing to display the user interface in the view of the three-dimensional environment (par. 59: each user's gaze may be indicated by a pointer, dot, laser, or other indicator within the environment; the indicator may enable a user to more accurately select various objects for movement or manipulation within the AR/VR environment; par. 60: in an embodiment, each user may see only his own indicator in the AR/VR environment; in another embodiment, any given user may make his indicator visible to other users).

As for claim 23, Agarawala teaches the method of claim 1, including: while the user interface for controlling the communication session is displayed, detecting movement of a hand of the first user in a direction; and in response to detecting the movement of the hand of the first user in the direction, ceasing to display the user interface in the view of the three-dimensional environment (pars. 111, 193: gesture to close tabs).

As for claim 24, Agarawala teaches the method of claim 1, including: while the communication session is ongoing, receiving a request to display controls for the communication session; and in response to receiving the request to display controls for the communication session: in accordance with a determination that the view of the communication session includes respective representations of participants of the communication session at respective first positions in the view of the three-dimensional environment in accordance with virtual spatial relationships of the participants in the three-dimensional environment, displaying the user interface for controlling the communication session including the first control object; and in accordance with a determination that the view of the communication session includes representations of the participants of the communication session at respective second positions in the view of the three-dimensional environment, displaying a different user interface for controlling the communication session that includes a plurality of control objects but does not include a control object for modifying an appearance of a three-dimensional region of the three-dimensional environment (par. 341: example of meeting settings that the user can adjust).

As for claim 27, Agarawala teaches the method of claim 1, wherein: in accordance with a determination that activation of the first control corresponds to a request to share the second set of virtual augmentations including the second virtual lighting with the second user, the three-dimensional region of the three-dimensional environment is displayed, via the second display generation component, with the second virtual lighting (pars. 58 and 68: sharing via an invite sent to other users to alter/change their current virtual room to the shared virtual room).
Agarawala does not teach fine-grained control of lighting. However, in the same field of endeavor, Mueller teaches: in response to detecting the first user input that activates the first control object, in accordance with a determination that activation of the first control corresponds to a request to share the first set of virtual augmentations including the first virtual lighting with the second user, the three-dimensional region of the three-dimensional environment is displayed, via the second display generation component, with the first virtual lighting (par. 274: designed lighting groups can be created and controlled via a user interface for the virtual environment and the real-world environment). It would have been obvious to one of ordinary skill in the art at the time of filing the claimed invention to combine Mueller into Agarawala for the reasons stated above with respect to claims 1, 25, and 26 (Mueller, par. 24).

Agarawala in view of Mueller does not suggest changing from a first to a second, different scenery having different lighting. However, in the same field of endeavor, Beall teaches: in accordance with a determination that activation of the first control object corresponds to a request to share the first set of virtual augmentations including the first immersive experience having the first virtual lighting and the first virtual scenery with the second user, the three-dimensional region of the three-dimensional environment is displayed, via the second display generation component, with the first immersive experience having the first virtual lighting and the first virtual scenery; and in accordance with a determination that activation of the first control object corresponds to a request to share the second set of virtual augmentations including the second immersive experience having the second virtual lighting and the second virtual scenery with the second user, the three-dimensional region of the three-dimensional environment is displayed, via the second display generation component, with the second immersive experience having the second virtual lighting and the second virtual scenery (citing the same passages applied to claims 1, 25, and 26 above: col. 14, lines 1-16; col. 4, lines 30-38; col. 14, lines 33-41; fig. 15; and col. 17, line 54 - col. 19, line 32). It would have been obvious to one of ordinary skill in the art before the effective filing date to combine Beall into Agarawala as modified by Mueller for the reasons stated above with respect to claims 1, 25, and 26 (Beall, column one).

As for claim 28, Agarawala teaches the method of claim 1, wherein modifying the appearance of the three-dimensional region of the three-dimensional environment for the first user of the first display generation component includes: in accordance with a determination that the first control object corresponds to a third set of virtual augmentations corresponding to a first virtual scene, changing the three-dimensional environment to display the first virtual scene via the first display generation component; and in accordance with a determination that the first control object corresponds to a fourth set of virtual augmentations corresponding to a second virtual scene, different from the first virtual scene, changing the three-dimensional environment to display the second virtual scene via the first display generation component (pars. 58 and 68: a user may have logged into the system and may want to join a meeting in progress; the various workspaces may be those meetings or workspaces to which the user has been invited, is authorized to join, or that is associated with a user account; the user may then join any of the AR and/or VR sessions in progress and see and interact with the other users and data of the sessions already in progress; or, for example, a user may select any active user and create a new workspace; sharing via an invite sent to other users to alter/change their current virtual room to the shared virtual room).

As for claim 29, Agarawala teaches the method of claim 28, wherein: in accordance with a determination that the second user has accepted a request to share a respective virtual scene corresponding to the first control object, the respective virtual scene is displayed via the second display generation component; and in accordance with a determination that the second user has declined the request to share the respective virtual scene corresponding to the first control object, the respective virtual scene is not displayed via the second display generation component (par. 58: only authorized users are able to access shared rooms; a second user may request a room, but if they are not authorized, they are inherently declined entry).

Note: any citation to specific pages, columns, lines, or figures in the prior art references, and any interpretation of the references, should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)).
Response to Arguments

Applicant's arguments with respect to claims 1-29 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Inquiries

Any inquiry concerning this communication should be directed to NICHOLAS AUGUSTINE at telephone number (571) 270-1056. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

/NICHOLAS AUGUSTINE/
Primary Examiner, Art Unit 2178
March 4, 2026

Prosecution Timeline

Feb 16, 2024: Application Filed
Oct 23, 2024: Response after Non-Final Action
Mar 21, 2025: Non-Final Rejection (§103)
Jun 30, 2025: Interview Requested
Jul 18, 2025: Response Filed
Oct 09, 2025: Final Rejection (§103)
Dec 16, 2025: Interview Requested
Jan 07, 2026: Examiner Interview Summary
Jan 07, 2026: Applicant Interview (Telephonic)
Jan 13, 2026: Request for Continued Examination
Jan 25, 2026: Response after Non-Final Action
Mar 04, 2026: Non-Final Rejection (§103, current)
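
A quick way to sanity-check the 3y 9m median against this case's own docket is to compute elapsed time between the events above. A sketch using a few of the listed dates (the whole-month helper is an approximation that ignores day-of-month precision):

```python
# Elapsed time between docket events, e.g. total pendency from filing to the
# current office action. Dates are taken from the timeline above.
from datetime import date

events = [
    (date(2024, 2, 16), "Application Filed"),
    (date(2025, 3, 21), "Non-Final Rejection (§103)"),
    (date(2025, 10, 9), "Final Rejection (§103)"),
    (date(2026, 1, 13), "Request for Continued Examination"),
    (date(2026, 3, 4), "Non-Final Rejection (§103, current)"),
]

def months_between(a: date, b: date) -> int:
    """Whole calendar months from a to b (approximate)."""
    return (b.year - a.year) * 12 + (b.month - a.month)

filed, current = events[0][0], events[-1][0]
total = months_between(filed, current)
print(f"pendency so far: {total // 12}y {total % 12}m")  # ~2y 1m

for (d1, _), (d2, label) in zip(events, events[1:]):
    print(f"{months_between(d1, d2):>3} months -> {label}")
```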

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598212: Cybersecurity Risk Analysis and Modeling of Risk Data on an Interactive Display (granted Apr 07, 2026; 2y 5m to grant)
Patent 12584752: Visual Vehicle-Positioning Fusion System and Method Thereof (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586264: Word Evaluation Value Acquisition Method, Apparatus and Program (granted Mar 24, 2026; 2y 5m to grant)
Patent 12578836: User Interface for Interacting with an Affordance in an Environment (granted Mar 17, 2026; 2y 5m to grant)
Patent 12580920: System and Method for Facilitating User Interaction with a Simulated Object Associated with a Physical Location (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73% (99% with interview; +27.8% lift)
Median Time to Grant: 3y 9m
PTA Risk: High
Based on 814 resolved cases by this examiner. Grant probability is derived from the examiner's career allow rate.
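
The with-interview figure is consistent with simply adding the interview lift to the base probability and capping the result: 0.73 + 0.278 = 1.008, displayed as 99%. A hedged sketch of that arithmetic follows; the additive model and the 99% cap are assumptions, not the product's documented method. Only the 73% base and +27.8% lift come from this page:

```python
# Assumed projection model: base grant probability (career allow rate) plus
# interview lift, capped below 100%. The blending rule and cap are guesses.
def projected_grant_probability(base: float, interview_lift: float,
                                with_interview: bool, cap: float = 0.99) -> float:
    p = base + (interview_lift if with_interview else 0.0)
    return min(p, cap)

base, lift = 0.73, 0.278
print(f"without interview: {projected_grant_probability(base, lift, False):.0%}")  # 73%
print(f"with interview:    {projected_grant_probability(base, lift, True):.0%}")   # 99%
```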
