Prosecution Insights
Last updated: April 19, 2026
Application No. 18/463,135

INTERACTION METHOD AND APPARATUS OF VIRTUAL SPACE, DEVICE, AND MEDIUM

Status: Non-Final OA (§101, §102, §103)
Filed: Sep 07, 2023
Examiner: COFINO, JONATHAN M
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
OA Round: 1 (Non-Final)

Grant Probability: 62% (Moderate)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 4m
Grant Probability With Interview: 94%
Examiner Intelligence

Career Allow Rate: 62% (130 granted / 210 resolved; at TC average)
Interview Lift: +32.2% (strong; allowance among resolved cases with vs. without an interview)
Typical Timeline: 2y 4m average prosecution; 13 applications currently pending
Career History: 223 total applications across all art units
Statute-Specific Performance

§101: 6.4% (-33.6% vs TC avg)
§102: 10.2% (-29.8% vs TC avg)
§103: 64.7% (+24.7% vs TC avg)
§112: 12.3% (-27.7% vs TC avg)

Tech Center averages are estimates; based on career data from 210 resolved cases.

Office Action

Rejections: §101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on/after Mar. 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claim 20 is ineligible under 35 U.S.C. 101 because this claim recites: “A computer-readable storage medium for storing computer programs …” The specification discloses in ¶ [0262]: “When implemented using software, it may be fully or partially implemented in the form of the computer program product [which] comprises … computer instruction(s). When loading and executing the computer program instruction on the computer, all or part of the processes or functions … are generated. The computer may be a general-purpose computer, a specialized computer, a computer network, or other programmable devices.
The computer instruction may be stored in the computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, the computer instruction can be transmitted from a web site, a computer, a server, or a data center through the wired method (e.g., coaxial cable, fiber optic, digital subscriber line (DSL)) or the wireless method (e.g., infrared, wireless, microwave, etc.) to another website site, computer, server, or data center.”

The broadest reasonable interpretation of a claim drawn to a computer-readable storage medium typically covers forms of non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer-readable storage medium, particularly when the specification is silent. Applicant is advised to amend the claim to exclude such transitory embodiments by adding “non-transitory” to “computer-readable storage medium …”, as “non-transitory computer-readable storage medium …”, which would render the claim statutory.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-6, 8-10, 13-14, and 18-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Terrano (U.S. PG-PUB 2020/0090401, ‘TERRANO’).

Regarding claim 1, TERRANO discloses an interaction method of a virtual space, comprising: presenting an interaction navigation panel in the virtual space in response to a wake-up instruction of the virtual space, wherein the interaction navigation panel comprises at least two pending interaction objects (TERRANO; ¶ 0041; “FIG. 3A illustrates an example usage of an [AR] system 300 by a first user 360 interacting with a second user 370 behind virtual display panels. The [AR] system 300 may include a wearable headset 362 worn by the first user 360 for reading or working activities. The headset 360 may render a number of virtual panels [‘pending interaction objects’] (e.g., 310, 320, 330, 340) to display information to the first user 360 [‘presenting an interaction navigation panel in the virtual space’].”); in response to a triggering operation on an interaction object of the pending interaction objects, determining a target display panel of an interaction page associated with the interaction object (TERRANO; ¶ 0045; “… when a virtual panel (e.g., 320) is turned into transparent or translucent, … visual anchor(s) may be displayed to indicate the disappeared virtual panel. … when the virtual panel 320 becomes transparent, the visual anchor 324 may be displayed at a corner of the transparent virtual panel 320 in an [unintrusive] manner to the view of the first user 360. The visual anchor 324 may be a corner-fitting object or an icon [‘interaction object’] associated with the virtual panel 320 [‘target display panel’]. The visual anchor 324 may have an opacity that enables a clear visual effect to the first user 360. When the first user 360 ends the interaction with the second user 370 and wants to bring back the virtual panel 320, the first user 360 may focus his/her eyes on the visual anchor 324. The system 300 may detect that the first user 360 is looking at the visual anchor 324 and adjust the opacity of the virtual panel 320 [‘determining a target display panel of an interaction page’] to make it visible [‘triggering operation on an interaction object of the pending interaction objects’]. … the visual anchor 318 associated with the virtual panel 310 may be displayed (e.g., at a corner of the virtual panel 310) when the virtual panel 310 is visible to the first user 360.”); if the target display panel is a close/long-range panel, waking up the close/long-range panel in the virtual space and displaying the interaction page associated with the interaction object on the close/long-range panel (TERRANO; ¶ 0045; “The visual anchor 324 may have an opacity that enables a clear visual effect to the first user 360. When the first user 360 ends the interaction with the second user 370 and wants to bring back the virtual panel 320 [‘target display panel’], the first user 360 may focus his/her eyes on the visual anchor 324. The system 300 may detect that the first user 360 is looking at the visual anchor 324 [‘interaction object on the close/long-range panel’] and adjust the opacity of the virtual panel 320 to make it visible [‘waking up the close/long-range panel in the virtual space and displaying the interaction page’].”), wherein the close/long-range panels are configured to display independently and in different positions (TERRANO; ¶ 0046; “FIG. 3B illustrates … usage of an [AR] system 300 by a user 360 watching a TV 380 behind virtual display panels. The user 360 may use the virtual panels (e.g., 310, 320, 330, 340) [‘close-range panels’] for … activities. The user 360 may occasionally watch a TV 380 which is partially behind [‘long-range panel’] the virtual panels 310/320 … The system 300 may determine the vergence distance and gazing point of the user 360 using an eye tracking system.
As soon as the user 360 moves his eyes from the virtual panels to the TV 380, the system 300 may detect that as an indication of the user 360 to look through the virtual panels (e.g., 310, 320) that interfere with the view of the user 360 [‘waking up the long-range panel in the virtual space and displaying the interaction page associated with the interaction object on the long-range panel’]. … the system 300 may determine that the vergence distance of the user 360 is beyond the virtual panels (e.g., 310/320) for a threshold distance and the virtual panels 310/320 are at least partially within the view of the user looking at the TV 380. The system 300 may change the opacity of the virtual panels 310/320 into transparent or translucent to allow the user to see through them. When the virtual panels 310/320 become transparent or translucent, the associated visual anchors 318/324 may be displayed at the corners of the transparent virtual panel 310/320, respectively. When the user 360 moves his eyes from the TV 380 back to the virtual panels 310/320, the user 360 just needs to focus on his eyes on the visual anchors 318/324. The system 300 may detect that the user is looking at the visual anchor 318/324 and change the corresponding panel 310/320 back to visible [‘waking up the close-range panel in the virtual space and displaying the interaction page associated with the interaction object on the close-range panel’].”).

Regarding claim 18, TERRANO discloses an interaction apparatus of a virtual space, comprising: a first response module for … ([Please see the treatment of this parallel limitation above.]); a second response module for determining a target display panel of an interaction page associated with an interaction object in response to a triggering operation on the interaction object of the pending interaction objects (TERRANO; ¶ 0026-27; FIG. 6; ¶ 0053; “At step 660, the system may display a visual anchor [‘second response module for determining a target display panel’] for the disappeared virtual panel. The visual anchor may be associated with the virtual panel and may be persistently displayed to the user in a unintrusive manner … at a corner of the virtual panel. The virtual anchor may be a corner-fitting object, a title bar, a wireframe, an element of displayed content, etc. At step 670, the system may determine whether the user is looking at the visual anchor. The system may compare the vergence distance of the user to the distance between the visual anchor and the user. When the difference between the vergence distance of the first user and the second distance between the visual anchor and the first user is within a second threshold distance, the system may determine that the user is looking at the visual anchor [‘triggering operation on the interaction object’]. At step 680, the system may make the virtual panel reappear [‘target display panel’] by increasing the opacity of the virtual panel back to the first opacity or another opacity that makes the virtual panel visible to the user.”); a first display module for … ([Please see the treatment of this parallel limitation above.]); a second display module for … ([Please see the treatment of this parallel limitation above.]) if the target display panel is the long-range panel (TERRANO; FIGS. 3A-3B, element ‘TV 380’); and ([Please see the treatment of this parallel limitation above.]).

Regarding claim 19, TERRANO discloses an electronic device, comprising: a processor and a memory (TERRANO; FIG. 9, ‘PROCESSOR 902’ and ‘MEMORY 904’; ¶ 0078), wherein the memory is used to store computer programs, and the processor is used to call and run the computer programs stored in the memory to execute (TERRANO; FIG. 9; ¶ 0079; “… processor 902 includes hardware for executing instructions, such as those making up a computer program.”) an interaction method of a virtual space, wherein the interaction method of the virtual space comprises: … ([The remaining limitations are repeated verbatim from those recited in claim 1.]).

Regarding claim 2, TERRANO discloses the method according to claim 1, wherein the determining the target display panel of the interaction page associated with the interaction object comprises: determining a type of the interaction object; and determining the target display panel of the interaction page associated with the interaction object based on the type of the interaction object (TERRANO; ¶ 0045; “… when a virtual panel (e.g., 320) is turned into transparent or translucent, … visual anchor(s) may be displayed to indicate the disappeared virtual panel. … when the virtual panel 320 becomes transparent, the visual anchor 324 may be displayed at a corner of the transparent virtual panel 320 in an unintrusive manner to the view of the first user 360. The visual anchor 324 may be a corner-fitting object or an icon associated with the virtual panel 320 [‘determining a type of the interaction object’]. The visual anchor 324 may have an opacity that enables a clear visual effect to the first user 360. When the first user 360 ends the interaction with the second user 370 and wants to bring back the virtual panel 320, the first user 360 may focus his/her eyes on the visual anchor 324. The system 300 may detect that the first user 360 is looking at the visual anchor 324 and adjust the opacity of the virtual panel 320 to make it visible [‘determining the target display panel of the interaction page associated with the interaction object’]. … the virtual anchor may be displayed only when the associated virtual panel has been made transparent or translucent.
… the virtual anchor may be displayed both when the associated virtual panel is visible and when the associated virtual panel is made transparent. … the visual anchor 318 associated with the virtual panel 310 [‘determining a type of the interaction object’] may be displayed (e.g., at a corner of the virtual panel 310) when the virtual panel 310 is visible to the first user 360.”).

Regarding claim 3, TERRANO discloses the method according to claim 2, wherein the determining the type of the interaction object, comprises: obtaining identification information of the interaction object (TERRANO; FIGS. 3A-3B; ¶ 0045; [The Examiner asserts that the ‘visual anchors’ are analogous to ‘identification information of the interaction object’, as when a user visually identifies them, the ‘interaction objects’ are then focused upon and/or selected by the user.]); and determining the type of the interaction object based on the identification information (TERRANO; ¶ 0041; “The headset 360 may render … virtual panels (e.g., 310, 320, 330, 340) to display information to the first user 360. … the virtual panels 310, 320, 330, and 340 may display a web content (e.g., a title 310, a text content 314, a button 316, etc.), a picture 322, a calendar 334, and a clock 340, [‘types of the interaction objects’]”).

Regarding claim 4, TERRANO discloses the method according to claim 2, wherein the determining the target display panel of the interaction page associated with the interaction object based on the type of the interaction object, comprises: searching for the target display panel of the interaction page associated with the interaction object (TERRANO; ¶ 0042; “… the first user 360 may be looking at the virtual panel 320 at the gazing direction 368 … The first user may move his eyes from the virtual panel 320 … The system 300 may use an eye tracking system to tack the vergence movement of the first user 360 and determine the vergence distance of the first user 360.
The system 300 may compare the vergence distance of the first user 360 to the distance from the virtual panel 320 to the first user 360.”) in a mapping relationship between an interaction object type and a display panel based on the type of the interaction object (TERRANO; ¶ 0041; “FIG. 3A illustrates an example usage of an [AR] system 300 by a first user 360 interacting with a second user 370 behind virtual display panels. The artificial reality system 300 may include a wearable headset 362 worn by the first user 360 for reading or working activities. The headset 360 may render … virtual panels (e.g., 310, 320, 330, 340) [‘target display panel’] to display information to the first user 360. … the virtual panels 310, 320, 330, and 340 may display a web content (e.g., a title 310, a text content 314, a button 316, etc.), a picture 322, a calendar 334, and a clock 340 [‘type of the interaction object’], respectively.”).

Regarding claim 5, TERRANO discloses the method according to claim 4, further comprising: if the target display panel is not found (TERRANO; FIG. 6; ¶ 0052; “The virtual panel having the second opacity may allow the user to see through the virtual panel. For example, the virtual panel with the second opacity may be transparent and the virtual panel may be invisible to the user. The virtual panel may disappear from the view of the user [‘target display panel is not found’] when the system changes the virtual panel's opacity to the second opacity. As another example, the virtual panel may be translucent and may allow the user the see through.”), determining the long-range panel as the target display panel of the interaction page associated with the interaction object according to a predetermined display rule (TERRANO; FIG. 3B; ¶ 0046; “… the system 300 may determine that the vergence distance of the user 360 is beyond the virtual panels (e.g., 310, 320) for a threshold distance [‘predetermined display rule’] and the virtual panels 310 and 320 are at least partially within the view of the user looking at the TV 380 [‘determining the long-range panel as the target display panel of the interaction page’].”).

Regarding claim 6, TERRANO discloses the method according to claim 1, wherein the interaction page comprises a first interaction control, and the method further comprises: presenting a virtual input model in the virtual space in response to a triggering operation on the first interaction control (TERRANO; FIGS. 3A-3B; ¶ 0041; “The headset 362 may also render a virtual keyboard 350 [‘presenting a virtual input model in the virtual space’] for the first user 360 to interact with the [AR] system 300.”); and presenting corresponding input interaction information on the interaction page based on an input operation by a user acting on the virtual input model (TERRANO; FIGS. 3A-3B; ¶ 0041; “… the virtual panel 310 … may display a web content (e.g., … a text content 314, a button 316, etc.)”).

Regarding claim 8, TERRANO discloses the method according to claim 6, wherein the presenting the corresponding input interaction information on the interaction page based on the input operation by the user acting on the virtual input model, comprises: displaying corresponding text (TERRANO; FIGS. 3A-3B; ¶ 0041; “… the virtual panels 310, 320, 330, and 340 may display a web content (e.g., a title 310, [Facebook 312] a text content 314 [‘displaying corresponding text … interaction information’], a button 316, etc.), a picture 322, a calendar 334, and a clock 340”);

Regarding claim 9, TERRANO discloses the method according to claim 6, wherein the virtual input model comprises an input region and a display region (TERRANO; FIGS. 3A-3B; ¶ 0041; “The headset 360 may render … virtual panels (e.g., 310, 320, 330, 340) to display information to the first user 360. … The headset 362 may also render a virtual keyboard 350 [‘input region’] for the first user 360 to interact with the [AR] system 300.”); the presenting the corresponding input interaction information on the interaction page based on the input operation by the user acting on the virtual input model, comprises: displaying the corresponding input interaction information in the display region based on an input operation by the user acting on the input region (TERRANO; ¶ 0041; “The virtual panels may have the same distance or different distances to the first user 360. When the first user is interacting with the virtual panels (e.g., reading information on the virtual panel 310 …” [The Examiner notes that text is displayed on ‘panel 310’ which the user may type into him/herself.]); and displaying the corresponding input interaction information on the interaction page in response to a triggering operation on a sending button in the input region (TERRANO; ¶ 0041; “The headset 360 may render … virtual panels (e.g., 310, 320, 330, 340) to display information to the first user 360. … the virtual panels 310, 320, 330, and 340 may display a web content (e.g., a title 310, a text content 314, a button 316 [‘sending button in the input region’], etc.), a picture 322, a calendar 334, and a clock 340, respectively.”).

Regarding claim 10, TERRANO discloses the method of claim 6, wherein before the presenting the virtual input model in the virtual space, the method further comprises: hiding the interaction navigation panel presented in the virtual space (TERRANO; FIG. 6; ¶ 0052; “At step 650, when the vergence distance of the first user is greater than the distance between the virtual panel and the user by a first threshold distance, the system may adjust the opacity of the virtual panel to a second opacity which is less opaque than the first opacity.
The virtual panel having the second opacity may allow the user to see through the virtual panel. … the virtual panel with the second opacity may be transparent and the virtual panel may be invisible to the user. The virtual panel may disappear from the view of the user when the system changes the virtual panel's opacity to the second opacity. … the virtual panel may be translucent and may allow the user the see through.”).

Regarding claim 13, TERRANO discloses the method according to claim 1, wherein after displaying the interaction page associated with the interaction object on the close-range panel, the method further comprises: if any other interaction object in the interaction navigation panel is detected to be triggered and a target display panel of an interaction page associated with the other interaction object is the long-range panel, waking up the long-range panel in the virtual space and displaying the interaction page associated with the other interaction object on the long-range panel (TERRANO; ¶ 0046; “FIG. 3B illustrates an example usage of an artificial reality system 300 by a user 360 watching a TV 380 [‘long-range panel’] behind virtual display panels. The user 360 may use the virtual panels (e.g., 310-340) for reading or working activities. The user 360 may … watch a TV 380 which is partially behind the virtual panels 320 and partially behind the virtual panel 310. The system 300 may determine the vergence distance and gazing point of the user 360 using an eye tracking system. As soon as the user 360 moves his eyes from the virtual panels to the TV 380, the system 300 may detect that as an indication of the user 360 to look through the virtual panels (e.g., 310, 320) that interfere with the view of the user 360. … the system 300 may determine that the vergence distance of the user 360 is beyond the virtual panels (e.g., 310, 320) for a threshold distance and the virtual panels 310 and 320 are at least partially within the view of the user looking at the TV 380. The system 300 may change the opacity of the virtual panels 310 and 320 into transparent or translucent to allow the user to see through them.”).

Regarding claim 14, TERRANO discloses the method according to claim 1, wherein after displaying the interaction page associated with the interaction object on the long-range panel, the method further comprises: if any other interaction object in the interaction navigation panel is detected to be triggered and a target display panel of an interaction page associated with the other interaction object is the close-range panel, waking up the close-range panel in the virtual space and displaying the interaction page associated with the other interaction object on the close-range panel (TERRANO; ¶ 0046; “When the user 360 moves his eyes from the TV 380 back to the virtual panel 310 or 320 [‘close-range panels’], the user 360 just needs to focus on his eyes on the visual anchor 318 or 324. The system 300 may detect that the user is looking at the visual anchor 318 or 324 and change the corresponding panel 310 or 320 back to visible. The virtual panels 330 and 340 may or may not change since they are not interfering with the user view for watching the TV 380.”).

Regarding claim 20, TERRANO discloses a computer-readable storage medium for storing computer programs which cause a computer to execute the interaction method of the virtual space (TERRANO; FIG. 9; ¶ 0081; “… storage 906 includes mass storage for data/instructions.” ¶ 0085; “… computer-readable non-transitory storage medium or media may include … semiconductor-based or other integrated circuit(s) (IC(s)) …”) according to claim 1.
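The vergence-based panel behavior the §102 rejections repeatedly cite from TERRANO (FIG. 6, steps 650-680) reduces to two threshold comparisons. The sketch below is a minimal, hypothetical model of that logic, not TERRANO's actual implementation: a panel is made transparent and replaced by its visual anchor when the user's vergence distance exceeds the panel distance by a first threshold, and restored when the user fixates near the anchor within a second threshold.

```python
# Illustrative model of TERRANO's FIG. 6 logic as characterized in the
# Office Action. All names, thresholds, and distances are assumptions.

from dataclasses import dataclass

@dataclass
class VirtualPanel:
    distance: float        # distance from the user to the panel
    opacity: float = 1.0   # 1.0 = fully visible (the "first opacity")
    anchor_visible: bool = False

HIDE_THRESHOLD = 0.5       # step 650: "first threshold distance"
FOCUS_THRESHOLD = 0.1      # step 670: "second threshold distance"

def update_panel(panel: VirtualPanel, vergence: float) -> None:
    if vergence - panel.distance > HIDE_THRESHOLD:
        # Steps 650/660: the user looks past the panel, so make it
        # transparent and display its visual anchor.
        panel.opacity = 0.0
        panel.anchor_visible = True
    elif panel.anchor_visible and abs(vergence - panel.distance) <= FOCUS_THRESHOLD:
        # Steps 670/680: the user fixates on the anchor, so restore the panel.
        panel.opacity = 1.0
        panel.anchor_visible = False

panel = VirtualPanel(distance=1.0)
update_panel(panel, vergence=3.0)   # looking at the TV behind the panel
print(panel.opacity)                # 0.0
update_panel(panel, vergence=1.05)  # refocusing near the visual anchor
print(panel.opacity)                # 1.0
```

This captures why the Examiner maps "looking past the panel" to hiding the close-range panel and "fixating on the anchor" to waking it back up; the claimed mapping of interaction objects to target panels is a separate question the rejection does not model.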
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 7 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over TERRANO as applied to claim 6 above, and further in view of Regenbrecht et al. ("A Tangible AR Desktop Environment", published 2001, 'REGENBRECHT').

Regarding claim 7, TERRANO discloses the method according to claim 6; however, TERRANO does not explicitly disclose that the presenting the virtual input model in the virtual space, comprises: if the interaction page is displayed on the close/long-range panel, presenting a close/long-range virtual input model corresponding to the close/long-range panel in the virtual space, which REGENBRECHT discloses (REGENBRECHT; FIG. 7; [The Examiner notes that the figure depicts three windows, such as those that are presented in Microsoft Windows, both as a conventional 2-D rendering and as a three-dimensional rendering using an HMD. The Examiner notes that the central window is placed rearwardly, while the windows depicting the file system and the model of the car are placed closer to the viewer.]), wherein the close-range virtual input model and the long-range virtual input model are displayed independently and in different positions (REGENBRECHT; FIG. 7; [The Examiner notes that the three windows are analogous to the close/long-range virtual input models, which are independent/distinct windows and are placed differently throughout the 3-D desktop space.]). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method according to claim 6 of TERRANO to include the disclosure that if the interaction page is displayed on the close/long-range panel, presenting a close/long-range virtual input model corresponding to the close/long-range panel in the virtual space, wherein the close-range virtual input model and the long-range virtual input model are displayed independently and in different positions of REGENBRECHT. The motivation for this modification is to intuitively simulate the real environment of a human user by inserting 2-D graphics upon his/her actual desktop workspace, bringing the metaphorical computer operating system desktop back into the physical world by superimposing the graphical windows upon the physical desk by augmented reality.

Regarding claim 12, TERRANO-REGENBRECHT disclose the method according to claim 7, wherein if the interaction page is displayed on the close-range panel, the presenting the first prompt pop-up window associated with the second interaction control in the virtual space, comprises: displaying the first prompt pop-up window associated with the second interaction control at a first predetermined position between the close-range panel and the interaction navigation panel presented in the virtual space (REGENBRECHT; FIG. 7; [The Examiner notes that the placement of the prompt pop-up windows is arbitrary and based upon the physical objects in the scene. The Examiner asserts that the placement of the prompt pop-up windows is an arbitrary design choice.]); and if the interaction page is displayed on the long-range panel, the presenting the first prompt pop-up window associated with the second interaction control in the virtual space, comprises: displaying the first prompt pop-up window associated with the second interaction control at a second predetermined position between the long-range panel and the interaction navigation panel presented in the virtual space (REGENBRECHT; FIG. 7; [The Examiner notes that the placement of the prompt pop-up windows is arbitrary and based upon the physical objects in the scene. The Examiner asserts that the placement of the prompt pop-up windows is an arbitrary design choice.]).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over TERRANO as applied to claim 1 above, and further in view of Katz (U.S. PG-PUB 2004/0078240, 'KATZ').

Regarding claim 11, TERRANO discloses the method according to claim 1; however, TERRANO does not explicitly disclose that the interaction page further comprises at least one presented second interaction control; and that the method further comprises [the following features], which KATZ discloses: in response to a triggering operation on a second interaction control of the at least one presented second interaction control, presenting a first prompt pop-up window associated with the second interaction control in the virtual space (KATZ; FIGS. 3-4; ¶ 0046-47), wherein the first prompt pop-up window at least comprises a confirmation sub-control and a cancellation sub-control; executing an interaction operation associated with the second interaction control in response to a triggering operation on the confirmation sub-control; and cancelling execution of the interaction operation associated with the second interaction control in response to a triggering operation on the cancellation sub-control (KATZ; FIGS. 3-4; ¶ 0047; “… Patient Name pop-up window 402 … is used to personalize the flow sheet to an individual patient. This pop-up window is initiated by clicking on the "change user name" 320 drop-down menu item [‘in response to a triggering operation on a second interaction control’] … The patient enters his name at 404 and when satisfied …, clicks on "OK" button 406 [‘confirmation sub-control’] to save the change [‘interaction operation associated with the second interaction control in response to a triggering operation on the confirmation sub-control’] or "Cancel" button 408 [‘cancellation sub-control’] to exit the patient’s name pop-up window without saving changes [‘interaction operation associated with the second interaction control in response to a triggering operation on the cancellation sub-control’].”). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method according to claim 1 of TERRANO to include the various teachings of KATZ. The motivation for this modification is to allow a user to either enter textual information by confirming the entry with an ‘OK’ button, or to allow a user to cancel the entry with a ‘CANCEL’ button.

Claims 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over TERRANO as applied to claim 1 above, and further in view of Pizano (U.S. PG-PUB 2008/0086646, 'PIZANO').
Regarding claim 15, TERRANO discloses the method according to claim 1; however, TERRANO does not explicitly disclose that the method according to claim 1 further comprises: displaying a second prompt pop-up window in the virtual space, wherein the second prompt pop-up window is displayed in front of the close-range panel, which PIZANO discloses (PIZANO; FIG. 13; ¶ 0090; “… presenting a pop-up window with text fields for a user name and a password … Such a pop-up window may be similar to window 122 …” ¶ 0094; “While the illustrated window 116 is a word processor, it will be appreciated that substantially any application that is compatible with the host operating environment … may be used …” [The Examiner asserts that ‘window 116’ is analogous to the ‘close-range panel’ of the instant claim(s).]).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method according to claim 1 of TERRANO to include the displaying a second prompt pop-up window in the virtual space, wherein the second prompt pop-up window is displayed in front of the close-range panel of PIZANO. The motivation for this modification is to enforce security protocols by superseding the active application to authenticate a permitted user.

Regarding claim 16, TERRANO-PIZANO disclose the method according to claim 15, wherein the displaying the second prompt pop-up window in the virtual space, comprises at least one of the following: displaying an authentication prompt pop-up window in the virtual space in response to a detection of an authentication instruction (PIZANO; ¶ 0081; “… the recipient client 38 identifies the secure package identifier 86 and prompts the recipient to submit identification and authentication information.
The recipient client 38 may prompt the user to submit the identification and authentication information … by presenting … pop-up window(s) with text fields for receiving a user name and a password …”).

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over TERRANO as applied to claim 1 above, and further in view of Kulewski et al. (U.S. Patent 10,101,891; 'KULEWSKI').

Regarding claim 17, TERRANO discloses the method according to claim 1; however, TERRANO does not explicitly disclose that the method according to claim 1 further comprises: adjusting the interaction navigation panel, the close-range panel, and/or the long-range panel presented in the virtual space in response to an adjustment operation on the interaction navigation panel, the close-range panel, and/or the long-range panel, wherein the adjusting the interaction navigation panel, the close-range panel, and/or the long-range panel presented in the virtual space (KULEWSKI; FIG. 7; Col. 18, Lines 56-61; “… method 700 implements block 612 of FIG. 6, in which the crop window is changed to fit within the image boundaries.”), comprises: if the adjustment operation is a scaling adjustment operation, performing scaling adjustment on the interaction navigation panel, the close-range panel, and/or the long-range panel based on the scaling adjustment operation (KULEWSKI; FIG. 7; Col. 18, Line 62 - Col. 19, Line 4; “In block 702, it is checked whether the crop window was resized based on the user input. … the resizing can be similar to the resizing described above with reference to FIGS. 2-3, based on a user moving a vertex or segment of the crop window within the user interface. … the resizing may cause … portion(s) of the crop window to move outside the image boundaries.
If a resizing has been performed, the method continues to block 704 in which the crop window is resized [‘scaling adjustment operation’] to be positioned within the image boundaries.”); if the adjustment operation is an orientation adjustment operation, adjusting an orientation of the interaction navigation panel, the close-range panel, and/or the long-range panel based on the orientation adjustment operation (KULEWSKI; Col. 19, Lines 39-50; “In block 710, it is checked whether relative rotation occurred between the crop window and the image based on the user input. … the user input may have rotated the image relative to the crop window, or rotated the crop window relative to the image. … user input causing such rotation can be provided by selecting a portion of the image or crop window and moving a pointer (e.g., finger on touchscreen, cursor controlled by pointing device, etc.) to rotate the image or crop window. … the rotation can be caused by user input [‘orientation adjustment operation’] such as commands (voice, text, menu, etc.), input in interface fields, etc.”); and if the adjustment operation is a region adjustment operation, performing region adjustment on the interaction navigation panel, the close-range panel, and/or the long-range panel based on the region adjustment operation (KULEWSKI; Col. 19, Lines 8-16; “In block 706, it is checked whether translation between the crop window and the image occurred based on the user input. … the user may have dragged the entire crop window across the image (e.g., using a drag operation) and the image editing interface without resizing the crop window, such that … portion(s) of the crop window are positioned outside the image boundaries. … the user input may have caused the image to be translated relative to the crop window [‘region adjustment operation’].”). 
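The three adjustment branches mapped above (scaling, orientation, and region adjustment) can be sketched as a simple dispatch. This is purely illustrative: the `Panel` structure, its field names, and the `adjust_panel` function are hypothetical and appear in neither the application nor the cited references.

```python
from dataclasses import dataclass


@dataclass
class Panel:
    """Illustrative stand-in for an interaction navigation, close-range, or long-range panel."""
    width: float
    height: float
    yaw_deg: float = 0.0  # orientation about the vertical axis, in degrees
    x: float = 0.0        # position of the panel's region in the virtual space
    y: float = 0.0


def adjust_panel(panel: Panel, operation: str, **params) -> Panel:
    """Dispatch one of the three claimed adjustment operations to a panel."""
    if operation == "scale":
        # Scaling adjustment operation: resize the panel by a factor.
        factor = params["factor"]
        panel.width *= factor
        panel.height *= factor
    elif operation == "orientation":
        # Orientation adjustment operation: rotate the panel.
        panel.yaw_deg = (panel.yaw_deg + params["delta_deg"]) % 360
    elif operation == "region":
        # Region adjustment operation: translate the panel's region.
        panel.x += params["dx"]
        panel.y += params["dy"]
    else:
        raise ValueError(f"unknown adjustment operation: {operation}")
    return panel
```

Each branch mutates only the attributes relevant to its operation, mirroring the three mutually exclusive "if the adjustment operation is …" limbs of the claim.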
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method according to claim 1 of TERRANO to include the various teachings of KULEWSKI. The motivation for this modification is to enhance the viewability of the various panels within the virtual space by reducing occlusions and arraying the panels in a more user-friendly manner.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN M COFINO whose telephone number is (303) 297-4268. The examiner can normally be reached Monday-Friday 10A-4P MT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang, can be reached at 571-272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JONATHAN M COFINO/ Examiner, Art Unit 2614 /KENT W CHANG/ Supervisory Patent Examiner, Art Unit 2614
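For readers tracing the claim 11 dispute, the confirmation/cancellation pop-up pattern the rejection maps to KATZ can be sketched as follows. This is an illustrative sketch only; the `PromptPopup` class and all names below are hypothetical and are not taken from the application, TERRANO, or KATZ.

```python
class PromptPopup:
    """Sketch of the claimed first prompt pop-up window: it carries a deferred
    interaction operation plus a confirmation sub-control and a cancellation
    sub-control."""

    def __init__(self, interaction_operation):
        self.interaction_operation = interaction_operation  # deferred action
        self.result = None

    def confirm(self):
        # Triggering the confirmation sub-control executes the associated
        # interaction operation.
        self.result = self.interaction_operation()
        return self.result

    def cancel(self):
        # Triggering the cancellation sub-control cancels execution of the
        # associated interaction operation.
        self.result = None
        return self.result


def on_second_interaction_control_triggered(operation):
    # Presenting the pop-up window in response to a triggering operation on
    # the second interaction control.
    return PromptPopup(operation)
```

The operation is held as a callable and only invoked on confirmation, so cancelling leaves the underlying state untouched, which is the behavioral distinction the claim recites.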

Prosecution Timeline

Sep 07, 2023
Application Filed
Feb 20, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597201
INTERACTIVE METHOD AND SYSTEM FOR DISPLAYING MEASUREMENTS OF OBJECTS AND SURFACES USING CO-REGISTERED IMAGES AND 3D POINTS
2y 5m to grant Granted Apr 07, 2026
Patent 12597202
GEOLOGICALLY MEANINGFUL SUBSURFACE MODEL GENERATION BASED ON A TEXT DESCRIPTION
2y 5m to grant Granted Apr 07, 2026
Patent 12536207
METHOD AND APPARATUS FOR RETRIEVING THREE-DIMENSIONAL (3D) MAP
2y 5m to grant Granted Jan 27, 2026
Patent 12511829
MAP GENERATION APPARATUS, MAP GENERATION METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM STORING PROGRAM
2y 5m to grant Granted Dec 30, 2025
Patent 12505605
SOLVING LOW EFFICIENCY OF MOVING ADJUSTMENT CAUSED BY CONTROLLING MOVEMENT OF IMAGE USING MODEL PARAMETERS
2y 5m to grant Granted Dec 23, 2025
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
62%
Grant Probability
94%
With Interview (+32.2%)
2y 4m
Median Time to Grant
Low
PTA Risk
Based on 210 resolved cases by this examiner. Grant probability derived from career allow rate.
