Prosecution Insights
Last updated: April 19, 2026
Application No. 18/421,856

METHODS FOR DISPLAYING A USER INTERFACE OBJECT IN A THREE-DIMENSIONAL ENVIRONMENT

Non-Final OA (§103, §112)

Filed: Jan 24, 2024
Examiner: HUYNH, LINDA TANG
Art Unit: 2172
Tech Center: 2100 — Computer Architecture & Software
Assignee: Apple Inc.
OA Round: 1 (Non-Final)

Grant Probability: 36% (At Risk)
OA Rounds: 1-2
To Grant: 3y 8m
With Interview: 68%

Examiner Intelligence

Career Allow Rate: 36% (100 granted / 274 resolved; -18.5% vs TC avg)
Interview Lift: +31.9% (strong lift among resolved cases with an interview vs. without)
Avg Prosecution: 3y 8m (30 applications currently pending)
Total Applications: 304 (across all art units)
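
The headline figures in this panel follow from simple arithmetic on the career counts. A minimal sketch (variable names are illustrative; the without-interview baseline is inferred from the displayed figures, not stated in the panel):

```python
# Career counts shown in the panel above.
granted = 100
resolved = 274

# Career allowance rate: 100 / 274 ~= 36.5%, displayed rounded as 36%.
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")

# The +31.9% interview lift is the with-interview allowance rate minus the
# without-interview rate. Working backward from the displayed 68%
# with-interview figure gives the implied without-interview baseline (~36.1%).
with_interview = 0.68
lift = 0.319
without_interview = with_interview - lift
print(f"Implied without-interview rate: {without_interview:.1%}")
```

Note the implied without-interview baseline (~36.1%) sits close to, but is not identical with, the rounded 36% career rate, which suggests the panel computes the lift from unrounded inputs.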

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§103: 53.4% (+13.4% vs TC avg)
§102: 13.4% (-26.6% vs TC avg)
§112: 18.6% (-21.4% vs TC avg)

Deltas are relative to the Tech Center average estimate • Based on career data from 274 resolved cases
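
Each displayed delta is consistent with a single Tech Center baseline of about 40%, so that estimate is assumed in this small sketch reproducing the deltas (the panel does not state the baseline directly):

```python
# Per-statute allowance rates from the panel above, against an assumed
# Tech Center average of ~40% (inferred: every listed delta matches it).
tc_avg = 0.40
rates = {"§101": 0.098, "§103": 0.534, "§102": 0.134, "§112": 0.186}

for statute, rate in rates.items():
    delta = rate - tc_avg
    print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```

The takeaway matches the panel: this examiner allows §103-rejected cases well above the Tech Center average but runs far below it on §101, §102, and §112 rejections.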

Office Action

DETAILED ACTION

This Office Action is sent in response to Applicant's Communication received 01/24/2024 for 18/421,856. Claims 1-19 are presented.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 10/24/2024 was filed before the mailing date of a first action. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the IDS is being considered by the examiner.

Claim Objections

Claim 12 is objected to because of the following informalities. Claim 12 recites "the three-dimensional environment , and" which appears to be a typo and has been interpreted as "the three-dimensional [[environment ,]] --environment,-- and". Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 9, 11, and 12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.

Claim 9 recites the relative term "gradually" which renders the claim indefinite. The term "gradually" is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. The cessation of the displayed user interface object has been rendered indefinite by the use of "gradually", because it is not possible to determine what type of display would be considered gradually ceasing. For application of the prior art of record and for purposes of rejection on its merits, "gradually" is being interpreted to mean "[[gradually]] ceasing".

Claims 11 and 12 recite the relative term "gradually" which renders the claims indefinite. The term "gradually" is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. The animation of the user interface object has been rendered indefinite by the use of "gradually", because it is not possible to determine what type of animation would be considered gradually moving. For application of the prior art of record and for purposes of rejection on its merits, "gradually" is being interpreted to mean "[[gradually]] moving".

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-13 and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Terahata (US 20160364916 A1) in view of Knepper et al. (US 20190349575 A1).

As to claim 1, Terahata discloses a method, comprising: at a computer system in communication with a display generation component and one or more input devices [Fig. 13, para 0117-0118, system including display and sensor (read: input device)]: displaying, via the display generation component, a user interface object at a first location in a three-dimensional environment with a first spatial arrangement relative to a viewpoint of a user of the computer system [Fig. 2, para 0058, 0064-0066, 0099, system display displays widget (read: user interface object) at position (read: first location) in virtual space (read: three-dimensional environment) at initial state of center of user field of view with positional relationship to window (read: first spatial arrangement)]; while displaying the user interface object at the first location relative to the three-dimensional environment, detecting, via the one or more input devices, a change in a spatial arrangement of the viewpoint of the user relative to the three-dimensional environment from a second spatial arrangement relative to the three-dimensional environment to a third spatial arrangement relative to the three-dimensional environment [Figs. 2, 4-5, 11, para 0051, 0071-0072, 0102-0103, display widget at initial state position and sensor detects user movement (read: change in spatial arrangement) from right of initial state (read: second spatial arrangement) of field of view to outside (read: third spatial arrangement) field of view relative to virtual space]; and in response to detecting the change in the spatial arrangement of the viewpoint of the user relative to the three-dimensional environment [Figs. 2, 4-5, 11, para 0051, 0071-0072, 0102-0103, sensor detects user moving field of view relative to virtual space]: in accordance with a determination that the change in the spatial arrangement of the viewpoint of the user relative to the three-dimensional environment satisfies one or more criteria, including a criterion that is satisfied when the second spatial arrangement of the viewpoint differs from the third spatial arrangement of the viewpoint by more than a threshold [], displaying the user interface object at a second location, different from the first location, relative to the three-dimensional environment [Figs. 4-5, 11, para 0064, 0072-0073, 0105, display widget at position (read: second location) different from initial state position in virtual space when determining user movement moves widget right of initial state and determining widget position considered outside (read: criterion) of field of view boundary (read: threshold)]; and in accordance with a determination that the change in the spatial arrangement of the viewpoint of the user relative to the three-dimensional environment does not satisfy the one or more criteria because the second spatial arrangement of the viewpoint differs from the third spatial arrangement of the viewpoint by less than the threshold [], maintaining display of the user interface object at the first location relative to the three-dimensional environment [Figs. 2, 4, 11, para 0064, 0071, 0103-0105, display widget at position in virtual space at initial state of center field of view when determining user movement moves widget right of initial state and determining widget position considered inside (read: change does not satisfy criteria) field of view boundary].

However, Terahata does not specifically disclose wherein "a threshold []" is "a threshold amount".

Knepper discloses: a criterion that is satisfied when the second spatial arrangement of the viewpoint differs from the third spatial arrangement of the viewpoint by more than a threshold amount [Fig. 6, para 0047, 0051-0052, determine head rotating field of view above threshold value]; and one or more criteria because the second spatial arrangement of the viewpoint differs from the third spatial arrangement of the viewpoint by less than the threshold amount [Fig. 6, para 0047, 0051-0052, determine head rotating field of view smaller than threshold value].

Terahata and Knepper are analogous art to the claimed invention, being from a similar field of endeavor of extended reality environments. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify a criterion differing by more or less than a threshold as disclosed by Terahata with a criterion differing by more or less than a threshold amount as disclosed by Knepper with a reasonable expectation of success. One of ordinary skill in the art would be motivated to modify Terahata as described above to enable strategic placement of virtual objects [Knepper, para 0066].
As to claim 2, Terahata discloses the method of claim 1, wherein displaying the user interface object at the second location relative to the three-dimensional environment in accordance with the determination that the change in the spatial arrangement of the viewpoint of the user relative to the three-dimensional environment satisfies the one or more criteria includes displaying the user interface object with the first spatial arrangement relative to the viewpoint of the user [Figs. 2, 4, 11, para 0064, 0071, 0103-0105, display widget with unchanged positional relationship to window when determining widget position considered outside of field of view boundary and displaying widget at different position in virtual space from initial state].

As to claim 3, Terahata discloses the method of claim 1, further comprising: while displaying the user interface object at the second location in the three-dimensional environment, detecting, via the one or more input devices, a first input corresponding to a request by the user to change a respective spatial arrangement of the user interface object relative to the viewpoint of the user [Figs. 4-5, para 0051, 0071-0072, display widget at different position in virtual space and sensor detects further user movement (read: first input) further moving field of view including widget in virtual space (read: respective spatial arrangement)]; and in response to detecting the first input, displaying the user interface object at a third location in the three-dimensional environment different from the second location in the three-dimensional environment with a fourth spatial arrangement relative to the viewpoint of the user [Figs. 4-5, para 0071-0072, move displayed widget to further position (read: third location) in relative space with widget portion positioned outside (read: fourth spatial arrangement) user field of view].

As to claim 4, Terahata discloses the method of claim 1, wherein the first spatial arrangement of the user interface object relative to the viewpoint of the user includes a respective distance between the viewpoint of the user and the user interface object [Figs. 2, 10, para 0064-0065, 0088-0089, widget at initial state of center of user field of view defined by distance between user and widget].

As to claim 5, Terahata discloses the method of claim 1, wherein the first spatial arrangement of the user interface object relative to the viewpoint of the user includes a respective orientation between the viewpoint of the user and the user interface object [Figs. 2, 10, para 0064-0065, widget at initial state of center of user field of view positioned in horizontal direction (read: respective orientation)].

As to claim 6, Terahata discloses the method of claim 1, further comprising: in response to detecting the change in the spatial arrangement of the viewpoint of the user relative to the three-dimensional environment [Fig. 11, para 0051, 0102-0103, sensor detects user moving field of view relative to virtual space]: in accordance with the determination that the second spatial arrangement of the viewpoint differs from the third spatial arrangement of the viewpoint by more than the threshold [], ceasing to display the user interface object in the three-dimensional environment [para 0064, 0071-0072, 0103-0105, determine user movement moves widget to position considered outside of field of view boundary including positioning the widget entirely outside the field of view].

However, Terahata does not specifically disclose wherein "the threshold []" is "the threshold amount".

Knepper discloses the determination that the second spatial arrangement of the viewpoint differs from the third spatial arrangement of the viewpoint by more than the threshold amount [Fig. 6, para 0047, 0051-0052, determine head rotating field of view above threshold value].

Terahata and Knepper are analogous art to the claimed invention, being from a similar field of endeavor of extended reality environments. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the determination differing by more than a threshold as disclosed by Terahata with a determination differing by more than a threshold amount as disclosed by Knepper with a reasonable expectation of success. One of ordinary skill in the art would be motivated to modify Terahata as described above to enable strategic placement of virtual objects [Knepper, para 0066].

As to claim 7, Terahata discloses the method of claim 1, wherein the change in the spatial arrangement of the viewpoint of the user relative to the three-dimensional environment includes a change in orientation of the viewpoint of the user relative to the three-dimensional environment between the second spatial arrangement of the viewpoint and the third spatial arrangement of the viewpoint [Figs. 2, 4-5, 11, para 0071-0072, 0102-0103, sensor detects user motion moving position (read: orientation) of field of view from right of initial state of field of view to outside of field of view relative to virtual space].

As to claim 8, Terahata discloses the method of claim 1, wherein the change in the spatial arrangement of the viewpoint of the user relative to the three-dimensional environment includes a change in spatial position of the viewpoint of the user relative to the three-dimensional environment between the second spatial arrangement of the viewpoint and the third spatial arrangement of the viewpoint [Figs. 2, 4-5, 11, para 0071-0072, 0102-0103, sensor detects user head motion (read: spatial position) moving field of view from right of initial state of field of view to outside of field of view relative to virtual space].
As to claim 9, Terahata discloses the method of claim 6, wherein ceasing to display the user interface object at the first location in the three-dimensional environment includes gradually ceasing to display the user interface object at the first location in the three-dimensional environment as the spatial arrangement of the viewpoint of the user relative to the three-dimensional environment changes [Figs. 2, 4-5, para 0071-0072, 0103-0105, start to move (read: gradually) widget from initial state position to different position to position where widget is positioned entirely out of view].

As to claim 10, Terahata discloses the method of claim 6, wherein displaying the user interface object at the second location relative to the three-dimensional environment in response to detecting the change in the spatial arrangement of the viewpoint of the user relative to the three-dimensional environment [Figs. 4-5, 11, para 0072-0073, 0075, 0105-0106, display widget at position different from initial state position in virtual space when determining user moves field of view] includes, in accordance with the viewpoint of the user remaining within a threshold [] of a respective spatial arrangement relative to the three-dimensional environment for a threshold period of time, redisplaying the user interface object in the three-dimensional environment [Figs. 6, 11, para 0072-0075, 0105-0106, move widget positioned entirely outside of view to position within field of view in virtual space when detecting (read: threshold period of time) user moves field of view and widget past boundary (read: threshold) outside field of view (read: respective spatial arrangement) in virtual space].

However, Terahata does not specifically disclose wherein "a threshold []" is "a threshold amount".

Knepper discloses the viewpoint of the user remaining within a threshold amount of a respective spatial arrangement relative to the three-dimensional environment [Fig. 6, para 0047, 0051-0052, determine threshold value of rotated field of view].

Terahata and Knepper are analogous art to the claimed invention, being from a similar field of endeavor of extended reality environments. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the user viewpoint remaining within a threshold as disclosed by Terahata with a user viewpoint remaining within a threshold amount as disclosed by Knepper with a reasonable expectation of success. One of ordinary skill in the art would be motivated to modify Terahata as described above to enable strategic placement of virtual objects [Knepper, para 0066].

As to claim 11, Terahata discloses the method of claim 1, wherein displaying the user interface object at the second location relative to the three-dimensional environment in response to detecting the change in the spatial arrangement of the viewpoint of the user relative to the three-dimensional environment includes displaying an animation of the user interface object gradually moving from the first location in the three-dimensional environment to the second location in the three-dimensional environment [Figs. 2, 4, 11, para 0064, 0071, 0074, 0103-0105, move widget from initial state position to changed position in virtual space when determining user movement moves widget within field of view].

As to claim 12, Terahata discloses the method of claim 11, wherein displaying the animation of the user interface object gradually moving from the first location in the three-dimensional environment to the second location in the three-dimensional environment includes: in response to detecting the change in the spatial arrangement of the viewpoint of the user relative to the three-dimensional environment [Fig. 11, para 0102, determine user action moving field of view in virtual space]: displaying, prior to ceasing to display the user interface object in the three-dimensional environment, a first animation corresponding to movement of the user interface object away from the first location in the three-dimensional environment toward the second location in the three-dimensional environment [Figs. 2, 4-5, para 0071-0072, 0074, 0102, move (read: first animation) widget following field of view movement from initial state position to changed position in virtual space before positioning widget entirely outside field of view], and displaying, prior to displaying the user interface object at the second location in the three-dimensional environment, a second animation corresponding to movement of the user interface object toward the second location in the three-dimensional environment [Figs. 2, 4-5, para 0071-0072, 0074, 0102, move (read: second animation) widget following field of view movement to changed position in virtual space from initial state position].

As to claim 13, Terahata discloses the method of claim 1, wherein displaying the user interface object at the first location in the three-dimensional environment includes displaying the user interface object in a reduced size state for presenting content [Figs. 2-3, 12, para 0064, 0067-0068, 0110, widget at initial state of center of user field of view displayed in non-selected status (read: reduced size state), note the limitation "for presenting content" is not being given patentable weight as the term "for" suggests or makes optional and does not require the step to be performed as the limitation is an intended result of the "reduced size state" as recited in the claim (see MPEP 2111.04), nevertheless note displaying widget in confirmed selected status includes expanding displayed widget buttons (read: content)].
As to claim 15, Terahata discloses the method of claim 1, wherein: displaying the user interface object at the second location in the three-dimensional environment in accordance with the determination that the change in the spatial arrangement of the viewpoint of the user relative to the three-dimensional environment satisfies the one or more criteria includes displaying an animation of the user interface object moving from the first location in the three-dimensional environment to the second location in the three-dimensional environment along a curved path [Figs. 2, 4, 10-11, para 0064, 0071, 0093, 0103-0105, move (read: animation) displayed widget to changed position in virtual space following horizontal axis direction (read: curved path) in virtual sphere when determining user movement moves widget within field of view where widget is considered inside the field of view].

As to claim 16, Terahata discloses the method of claim 1, further comprising: while displaying the user interface object in the three-dimensional environment, detecting, via the one or more input devices, a first input corresponding to movement of a portion of the user [Fig. 12, para 0051, 0099, 0109, display widget in virtual space and sensor detects position (read: first input) of user gaze]; and in response to detecting the first input: in accordance with a determination that the user interface object is in an expanded size state for presenting content and the movement of the portion of the user satisfies one or more second criteria, displaying, concurrently with the user interface object, a content control user interface object for controlling content being presented by the computer system that is associated with the user interface object [Fig. 12, para 0109-0110, 0114, system displays widget in selected status and expanded buttons (read: content control user interface object) when determining user gaze overlaps widget component in selected status (read: expanded size state) for predetermined time (read: second criteria), note the limitations "for presenting content" and "for controlling content" are not being given patentable weight as the term "for" suggests or makes optional and does not require the step to be performed as the limitation is an intended result of the "expanded size state" and "content control user interface object" as recited in the claim (see MPEP 2111.04), nevertheless note displaying widget in confirmed selected status includes expanding displayed widget buttons and button components perform user interface operation]; and in accordance with a determination that the user interface object is in a reduced size state for presenting the content and the movement of the portion of the user satisfies one or more third criteria, different from the one or more second criteria, displaying, concurrently with the user interface object, the content control user interface object [Fig. 12, para 0109-0110, 0113, display widget and widget component when determining widget in non-selected status (read: reduced size status) and user gaze overlaps widget (read: third criteria), note the limitation "for presenting content" is not being given patentable weight as the term "for" suggests or makes optional and does not require the step to be performed as the limitation is an intended result of the "reduced size state" as recited in the claim (see MPEP 2111.04), nevertheless note displaying widget in confirmed selected status includes expanding displayed widget buttons].

As to claim 17, Terahata discloses the method of claim 1, further comprising: while displaying the user interface object in the three-dimensional environment in an expanded size state, detecting, via the one or more input devices, a first input corresponding to displaying a home user interface of the computer system in the three-dimensional environment [Fig. 12, para 0109-0112, 0114, display widget in selected status (read: expanded size state) including expanded buttons (read: home user interface, note button components perform user interface operation and falls under broadest reasonable interpretation of home user interface including controlling computer settings consistent with Applicant's specification [para 0289]) and sensor detecting input selecting widget component in virtual space]; and in response to detecting the first input: displaying the home user interface of the computer system in the three-dimensional environment [Figs. 11-12, para 0099, 0110, display widget in selected status including expanded buttons in virtual space]; and displaying the user interface object in a reduced size state for presenting content at the first location in the three-dimensional environment [Fig. 12, para 0112-0113, display widget in non-selected status (read: reduced size state), note the limitation "for presenting content" is not being given patentable weight as the term "for" suggests or makes optional and does not require the step to be performed as the limitation is an intended result of the "reduced size state" as recited in the claim (see MPEP 2111.04), nevertheless note displaying widget in confirmed selected status includes expanding displayed widget buttons].
As to claim 18, Terahata and Knepper are combined at least for the reasons above; Terahata discloses a computer system that is in communication with a display generation component and one or more input devices, the computer system comprising: one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions [Fig. 13, para 0037, 0117-0118, system includes display, sensor, storage medium carrying instructions executed by processor] for: performing limitations substantially similar to those recited in claim 1, and the claim is rejected under similar rationale.

As to claim 19, Terahata and Knepper are combined at least for the reasons above; Terahata discloses a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of a computer system that is in communication with a display generation component and one or more input devices, cause the computer system to perform a method [Fig. 13, para 0037, 0117-0118, storage medium carrying instructions executed by processor of system including display and sensor] comprising: limitations substantially similar to those recited in claim 1, and the claim is rejected under similar rationale.

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Terahata and Knepper as applied to claim 1 above, and further in view of Berliner et al. (US 20220254120 A1).

As to claim 14, Terahata discloses the method of claim 13, further comprising: while displaying the user interface object at the first location in the three-dimensional environment, detecting, via the one or more input devices, a first input corresponding to a request to present the content in an expanded size state, different from the reduced size state [Fig. 12, para 0109-0110, 0114, display widget in initial state position in virtual space and sensor determines user gaze (read: first input) overlaps widget to change widget from non-selected status to selected status (read: expanded size state) including expanding widget button components]; in response to detecting the first input, ceasing display of the user interface object in the reduced size state for presenting content at the first location in the three-dimensional environment and displaying the user interface object in the expanded size state for presenting content … in the three-dimensional environment, wherein displaying the user interface object in the expanded size state … in the three-dimensional environment includes displaying a first animation corresponding to the user interface object in the reduced size state transitioning into the user interface object in the expanded size state … [Figs. 11-12, para 0099, 0109-0110, 0133, change display of widget displaying buttons at initial state position in virtual space from (read: transitioning) non-selected state to widget in selected status in virtual space including expanding (read: first animation) widget and displaying button components when determining user gaze overlaps widget, note process steps may be altered]; while displaying the user interface object in the expanded size state for presenting content … in the three-dimensional environment, detecting, via the one or more input devices, a second input corresponding to a request to present the content in the reduced size state [Fig. 12, para 0109-0110, 0112-0114, display widget in selected status including expanded widget button component in virtual space and sensor determines user gaze input (read: second input) overlaps widget in selected status for predetermined time, where executing selected widget component displays widget in non-selected status]; and in response to detecting the second input, displaying the user interface object in the reduced size state at the first location in the three-dimensional environment, wherein displaying the user interface object in the reduced size state at the first location in the three-dimensional environment includes displaying an animation corresponding to the user interface object in the expanded size state transitioning into the user interface object in the reduced size state … [Figs. 11-12, para 0099, 0110-0113, 0133, change display form (read: animation) of widget at initial state position in virtual space in selected status to non-selected status when determining user gaze input for predetermined time, note process steps may be altered].

However, Terahata and Knepper do not specifically disclose displaying the user interface object in the expanded size state for presenting content at a third location, different from the first location, in the three-dimensional environment; transitioning into the user interface object in the expanded size state while concurrently moving from the first location toward the third location in the three-dimensional environment along a first curved path; and transitioning into the user interface object in the reduced size state while concurrently moving from the third location toward the first location in the three-dimensional environment along a second curved path.
Berliner discloses: displaying the user interface object in the expanded size state for presenting content at a third location, different from the first location, in the three-dimensional environment [para 0252, 0259-0260, 0266, highlight virtual object including enlarging (read: expanded size state) and moving (read: third location) virtual object from virtual region position in extended reality environment (read: three-dimensional environment)]; transitioning into the user interface object in the expanded size state while concurrently moving from the first location toward the third location in the three-dimensional environment along a first curved path [para 0259-0260, 0266, 0388, highlight virtual object including enlarging and moving virtual object from virtual region position in extended reality environment in curved plane]; and transitioning into the user interface object in the reduced size state while concurrently moving from the third location toward the first location in the three-dimensional environment along a second curved path [para 0284-0285, 0371, 0388, unhighlight virtual object including returning virtual object size to prior size (read: reduced size state) and move virtual object back from previous moved position in curved plane]. Terahata, Knepper, and Berliner are analogous art to the claimed invention being from a similar field of endeavor of extended reality environments. 
Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify displaying the user interface object in the expanded size state, transitioning into the user interface object in the expanded size state, and transitioning into the user interface object in the reduced size state as disclosed by Terahata and Knepper with the displaying an object in an expanded size state at a different location, transitioning into an expanded size state including moving an object along a curved path, and transitioning into a reduced size state including moving an object along a curved path as disclosed by Berliner with a reasonable expectation of success. One of ordinary skill in the art would be motivated to modify Terahata and Knepper as described above to provide user feedback during object selection [Berliner, para 0239]. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Ebstyne et al. (US 20150317831 A1) and Schoen (US 20230026638 A1) generally disclose transitioning user interface objects between world-locked and viewpoint-locked display states. Any inquiry concerning this communication or earlier communications from the examiner should be directed to LINDA HUYNH whose telephone number is (571)272-5240 and email is linda.huynh@uspto.gov. The examiner can normally be reached M-F between 9am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Queler can be reached at (571) 272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LINDA HUYNH/Primary Examiner, Art Unit 2172
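The limitation the examiner maps to Berliner is a UI object that grows from a reduced size state to an expanded size state while moving from a first location to a third location along a curved path, and reverses on the second input. A minimal, hypothetical sketch of that behavior follows; it is not drawn from the application or the cited references, and all function and parameter names are illustrative. It models the curved path as a quadratic Bezier curve through an assumed control point while linearly interpolating the object's scale.

```python
def lerp(a, b, t):
    """Linear interpolation between scalars a and b at parameter t in [0, 1]."""
    return a + (b - a) * t


def bezier_point(p0, control, p2, t):
    """Point at parameter t on a quadratic Bezier curve through 3D tuples.

    Evaluates B(t) = (1-t)^2 * p0 + 2(1-t)t * control + t^2 * p2
    via nested linear interpolation (de Casteljau's algorithm).
    """
    return tuple(
        lerp(lerp(a, c, t), lerp(c, b, t), t)
        for a, c, b in zip(p0, control, p2)
    )


def animation_frames(first_loc, third_loc, control,
                     reduced_scale, expanded_scale, steps):
    """Yield (position, scale) pairs for the expand transition.

    Position travels from first_loc to third_loc along the curved path;
    scale grows from the reduced size state to the expanded size state.
    Playing the frames in reverse gives the reduce transition.
    """
    for i in range(steps + 1):
        t = i / steps
        yield (bezier_point(first_loc, control, third_loc, t),
               lerp(reduced_scale, expanded_scale, t))
```

For example, `animation_frames((0, 0, 0), (1, 0, 1), (0.5, 1, 0.5), 0.5, 1.0, 10)` starts at the first location at half scale, ends at the third location at full scale, and at the midpoint sits above the straight line between them, which is what makes the path curved rather than linear.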

Prosecution Timeline

Jan 24, 2024
Application Filed
Dec 27, 2025
Non-Final Rejection — §103, §112
Apr 08, 2026
Applicant Interview (Telephonic)
Apr 08, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12578837
USER INTERFACES FOR MANAGING SHARING OF CONTENT IN THREE-DIMENSIONAL ENVIRONMENTS
2y 5m to grant · Granted Mar 17, 2026
Patent 12547310
INFORMATION PROCESSING DEVICE
2y 5m to grant · Granted Feb 10, 2026
Patent 12541287
INTEGRATED ENERGY DATA SCIENCE PLATFORM
2y 5m to grant · Granted Feb 03, 2026
Patent 12524136
EVENT TRANSCRIPT PRESENTATION
2y 5m to grant · Granted Jan 13, 2026
Patent 12524124
RECORDING FOLLOWING BEHAVIORS BETWEEN VIRTUAL OBJECTS AND USER AVATARS IN AR EXPERIENCES
2y 5m to grant · Granted Jan 13, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
36%
Grant Probability
68%
With Interview (+31.9%)
3y 8m
Median Time to Grant
Low
PTA Risk
Based on 274 resolved cases by this examiner. Grant probability derived from career allow rate.
