Prosecution Insights
Last updated: April 19, 2026
Application No. 18/123,837

Arranging Virtual Objects

Final Rejection — §103, §112
Filed: Mar 20, 2023
Examiner: SHAH, SUJIT
Art Unit: 2624
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 6 (Final)
Grant Probability: 66% (Favorable)
OA Rounds: 7-8
Time to Grant: 2y 8m
With Interview: 77%

Examiner Intelligence

Career Allow Rate: 66% (above average; 269 granted / 408 resolved; +3.9% vs TC avg)
Interview Lift: +11.4% (moderate), comparing resolved cases with and without an interview
Avg Prosecution: 2y 8m typical; 37 applications currently pending
Total Applications: 445 across all art units (career history)
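As a quick sanity check, the headline allow rate follows directly from the counts above; a minimal sketch of the arithmetic, using only numbers shown on this page:

```python
# Career allow rate computed from the counts shown above (269 granted of 408 resolved).
granted = 269
resolved = 408
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # -> 65.9%, displayed above rounded to 66%
```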

Statute-Specific Performance

§101: 2.3% (-37.7% vs TC avg)
§103: 65.4% (+25.4% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 16.1% (-23.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 408 resolved cases
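Assuming the "vs TC avg" figures are simple percentage-point differences from the Tech Center average estimate, the implied baselines can be backed out as follows (a sketch of the arithmetic, not an official USPTO statistic):

```python
# Back out the implied Tech Center baseline per statute, assuming
# "vs TC avg" means (examiner value - TC average) in percentage points.
examiner_rate = {"101": 2.3, "103": 65.4, "102": 12.7, "112": 16.1}      # percent
delta_vs_tc   = {"101": -37.7, "103": 25.4, "102": -27.3, "112": -23.9}  # points

for statute, rate in examiner_rate.items():
    tc_average = rate - delta_vs_tc[statute]
    print(f"§{statute}: examiner {rate:.1f}%, implied TC average {tc_average:.1f}%")
# Each implied TC average works out to 40.0% with the figures shown above.
```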

Office Action

§103, §112
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Claims 1-6, 19-22 are amended. Claim 24 is newly added. Claims 8, 23 are cancelled. Claim Rejections - 35 USC § 112 The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention. Claim 24 rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 24 recites the limitation " wherein the region comprises the first two-dimensional virtual surface, the second two-dimensional virtual surface, and the space between the first two-dimensional virtual surface and the second two-dimensional virtual surface " in 1-3. There is insufficient antecedent basis for this limitation in the claim. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 1, 5-6, 8-12, 16, 18-21 is/are rejected under 35 U.S.C. 103 as being unpatentable over BAR-ZEEV et al (US Pub 2020/0225747) in view of Schwesinger et al (US Pub 2016/0162082) and Wright et al (US Pub 2016/0378294). With respect to claim 1, BAR-ZEEV discloses a method comprising (par 0002; discloses disclosure relates generally to user interfaces for interacting with an electronic device, and more specifically to interacting with an electronic device using an eye gaze ): at a device including an image sensor (par 0060; discloses system 100 uses image sensor(s) 108 to receive user inputs, such as hand gestures), a display, one or more processors, and a non-transitory memory (fig. 1A; system 100 includes display 120, processor 102 and memory 106): detecting a gaze input corresponding to a user focus location inside the region (see fig. 3 and fig. 4; par 0086; discloses While affordance 306 is selected, device 300 receives an input (e.g., an eye gesture, a body gesture, a voice input, a controller input, or a combination or portion thereof, such as the exemplary inputs described above). In the illustrated example, the input includes user 200 changing the position of his eyes such that his gaze direction moves on display 302 from location 308 to location 400 shown in FIG. 
4.); selecting, based on the gaze input, an object placement location inside the region that is proximate to the user focus location; and displaying, on the display, a movement of the virtual object from an initial location outside the region to the object placement location inside the region (par 0086; discloses In response to receiving the input, device 300 performs an action associated with affordance 306 in accordance with the input. In the example illustrated in FIG. 4, device 300 moves affordance 306 in accordance with the change in the gaze direction of user 200, translating affordance 306 upward and to the left on display 302 from the location of affordance 306 shown in FIG. 3 to the location shown in FIG. 4; fig. 19Q-19S; discloses an example where a cup is moved from one region to another region i.e., a cup is moved from rectangular table (i.e. corresponding to outside region) to round table (i.e. corresponding to inside region). par 0139; discloses user 200 can use gaze 1906 to quickly and roughly designate an initial placement position and then make fine adjustments to the position that do not depend on gaze); BAR-ZEEV doesn’t expressly disclose detecting, in a series of images captured the image sensor, a gesture corresponding to a command to move a virtual object inside a region of an environment from an initial location outside the region, wherein the gesture includes a hand movement that extends in a direction toward the region; In the same field of endeavor, Schwesinger discloses system and method for identifying a target object based one eye tracking and gesture recognition (see abstract; par 0040; discloses the targeted object may be an object that the user intends to move, resize, select, or activate, for example); Schwesinger discloses detecting, in a series of images captured the image sensor, (par 0067; discloses At 106, video imaging a head and pointer of the user is received from the machine vision system. The video may include a series of time-resolved depth images from a depth camera and/or a series of time-resolved images from a flat-image camera) a gesture corresponding to a command to move a virtual object inside a region of an environment from an initial location outside the region, wherein the gesture includes a hand movement that extends in a direction toward the region (par 0041; discloses In the example of FIG. 1, pointer 80 is one of the user's fingers. The user, while gazing at targeted object 72′, positions the pointer to partly occlude the targeted object (from the user's own perspective). Fig. 1; discloses the hand gesture extends in a direction towards the region; Par 0045; discloses Once an object is targeted, user 12 may signal further action to be taken on the object. One or more of the NUI engines of compute system 16 may be configured to detect the user's intent to act on the targeted object. Par 0047; discloses the user may want to move a targeted (optionally selected) object to a different position on display 14. The user may signal this intent by maintaining gaze on the targeted object while moving the pointer up, down, or to the side. 
Pointer-projection engine 82 recognizes the change in pointer coordinates X.sub.pp, Y.sub.pp and directs the OS to move the targeted object to the changed coordinates); Schwesinger further discloses moving object from one region to another (par 0054; discloses In cases where a plurality of displays 14 are operatively coupled to compute system 16, user 12 may want to move or copy an object from one display to another); Therefore, it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by BAR-ZEEV to incorporate the teachings of Schwesinger to use hand gestures to provide a command to the system to act on the targeted object in order to allow user to control the targeted object in different ways, such as moving, resizing, selecting, or activating. The modification would allow user to perform different functions with the targeted object using simple hand gestures; BAR-ZEEV as modified by Schwesinger don’t expressly disclose detecting hand gestures comprises a movement that begins at a first location in the series of images and extends to a second location in the series of images of images; In the same field of endeavor, Wright discloses virtual display device and control method (see abstract); Wright discloses detecting hand gestures comprises a movement that begins at a first location in the series of images and extends to a second location in the series of images of images (par 0024; discloses in the example illustrated in FIG. 2, the head mounted display device 10 worn by the user 26 may be configured to detect motion of the user's hand. Based on a series of images captured by the optical sensor system 16, the head mounted display device 10 may determine whether motion of hand 38 of the user 26 is trackable. For example, the user's hand at positions 38 and 38A are within the field of view of the optical sensor system 16. Accordingly, motion of the user's hand moving from position 38 to position 38A over time T1 is trackable by the head mounted display device 10; par 0030; discloses if motion of the user's hand is currently trackable, the head mounted display device 10 may be configured to track motion of the hand in the images to identify a user hand gesture. In response to at least identifying the user hand gesture, the head mounted display device may execute a programmatic function that may be selected as described above); Therefore it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by BAR-ZEEV as modified by Schwesinger to incorporate the teachings of Wright to detect motion of the user hand from one location to another using series of images in order to accurately identify specific user gestures to provide the command to the device. With respect to claim 5, BAR-ZEEV as modified by Schwesinger and Wright discloses wherein the region is associated with a physical element in the environment (BAR-ZEEV; par 0074 discloses In some embodiments, affordance 306 is associated with a physical object (e.g., an appliance or other device that can be controlled via interaction with affordance 306)). With respect to claim 6, BAR-ZEEV as modified by Schwesinger and Wright discloses wherein the region is associated with a portion of the physical element (BAR-ZEEV; par 0138; discloses in accordance with user input 1910e including a first type of input (e.g., a touch on touch-sensitive surface 1904), device 1900 designates a tentative placement position for photo 1908c on wall 1916; see par 0064; fig. 
1D; discloses a virtual hat display on user’s head). With respect to claim 9, BAR-ZEEV as modified by Schwesinger and Wright discloses further comprising obtaining a confirmation input confirming a selection of the user focus location (BAR-ZEEV; par 0080; discloses In response to receiving the confirming action, device 300 selects affordance 306. That is, affordance 306 is selected in response to the combination of the user looking at affordance 306 and providing a confirming action). With respect to claim 10, BAR-ZEEV as modified by Schwesinger and Wright discloses wherein the confirmation input comprises a gesture input (BAR-ZEEV; par 0081; discloses Non-limiting examples of a confirming action include an eye gesture, a body gesture, a voice input, a controller input, or a combination thereof.). With respect to claim 11, BAR-ZEEV as modified by Schwesinger and Wright discloses wherein the confirmation input comprises an audio input (BAR-ZEEV; par 0081; discloses Non-limiting examples of a confirming action include an eye gesture, a body gesture, a voice input, a controller input, or a combination thereof). With respect to claim 12, BAR-ZEEV as modified by Schwesinger and Wright discloses further comprising obtaining the confirmation input from a user input device (BAR-ZEEV; par 0081; discloses Non-limiting examples of a confirming action include an eye gesture, a body gesture, a voice input, a controller input, or a combination thereof). With respect to claim 16, BAR-ZEEV as modified by Schwesinger and Wright further comprising displaying a visual effect emanating from the object placement location (BAR-ZEEV; par 0086; discloses In the example illustrated in FIG. 4, device 300 moves affordance 306 in accordance with the change in the gaze direction of user 200, translating affordance 306 upward and to the left on display 302 from the location of affordance 306 shown in FIG. 3 to the location shown in FIG. 4.). With respect to claim 18, BAR-ZEEV as modified by Schwesinger and Wright discloses further comprising obtaining an object selection input corresponding to a user selection of the virtual object (BAR-ZEEV; par 0080; discloses In response to receiving the confirming action, device 300 selects affordance 306. That is, affordance 306 is selected in response to the combination of the user looking at affordance 306 and providing a confirming action. The confirming action is beneficial for preventing false positives (e.g., incorrect determinations by device 300 that user 200 desires to select or act upon affordance 306)). With respect to claim 19, BAR-ZEEV discloses a device comprising: an image sensor (par 0060; discloses system 100 uses image sensor(s) 108 to receive user inputs, such as hand gestures); a display (fig. 1A; system 100 includes display 120); one or more processors; a non-transitory memory; and one or more programs stored in the non-transitory memory, which, when executed by the one or more processors, cause the device to: (fig. 1A; system 100 includes processor 102 and memory 106; par 0121; discloses executable instructions for performing the features of methods 1600, 1700, and/or 1800 described above are, optionally, included in a transitory or non-transitory computer-readable storage medium (e.g., memory(ies) 106) or other computer program product configured for execution by one or more processors (e.g., processor(s) 102).); detect a gaze input corresponding to a user focus location inside the region (see fig. 3 and fig. 
4; par 0086; discloses While affordance 306 is selected, device 300 receives an input (e.g., an eye gesture, a body gesture, a voice input, a controller input, or a combination or portion thereof, such as the exemplary inputs described above). In the illustrated example, the input includes user 200 changing the position of his eyes such that his gaze direction moves on display 302 from location 308 to location 400 shown in FIG. 4.); select, based on the gaze input, an object placement location inside the region that is proximate to the user focus location; and display, on the display, a movement of the virtual object from the initial location outside the region to the object placement location inside the region (par 0086; discloses In response to receiving the input, device 300 performs an action associated with affordance 306 in accordance with the input. In the example illustrated in FIG. 4, device 300 moves affordance 306 in accordance with the change in the gaze direction of user 200, translating affordance 306 upward and to the left on display 302 from the location of affordance 306 shown in FIG. 3 to the location shown in FIG. 4; fig. 19Q-19S; discloses an example where a cup is moved from one region to another region i.e., a cup is moved from rectangular table (i.e. corresponding to outside region) to round table (i.e. corresponding to inside region)); BAR-ZEEV doesn’t expressly disclose detect, in a series of images captured the image sensor, a gesture corresponding to a command to move a virtual object inside a region of an environment from an initial location outside the region, wherein the gesture includes a hand movement that extends in a direction toward the region; In the same field of endeavor, Schwesinger discloses system and method for identifying a target object based one eye tracking and gesture recognition (see abstract; par 0040; discloses the targeted object may be an object that the user intends to move, resize, select, or activate, for example); Schwesinger discloses detect, in a series of images captured the image sensor, (par 0067; discloses At 106, video imaging a head and pointer of the user is received from the machine vision system. The video may include a series of time-resolved depth images from a depth camera and/or a series of time-resolved images from a flat-image camera) a gesture corresponding to a command to move a virtual object inside a region of an environment from an initial location outside the region, wherein the gesture includes a hand movement that extends in a direction toward the region (par 0041; discloses In the example of FIG. 1, pointer 80 is one of the user's fingers. The user, while gazing at targeted object 72′, positions the pointer to partly occlude the targeted object (from the user's own perspective). Fig. 1; discloses the hand gesture extends in a direction towards the region; Par 0045; discloses Once an object is targeted, user 12 may signal further action to be taken on the object. One or more of the NUI engines of compute system 16 may be configured to detect the user's intent to act on the targeted object. Par 0047; discloses the user may want to move a targeted (optionally selected) object to a different position on display 14. The user may signal this intent by maintaining gaze on the targeted object while moving the pointer up, down, or to the side. 
Pointer-projection engine 82 recognizes the change in pointer coordinates X.sub.pp, Y.sub.pp and directs the OS to move the targeted object to the changed coordinates); Schwesinger further discloses moving object from one region to another (par 0054; discloses In cases where a plurality of displays 14 are operatively coupled to compute system 16, user 12 may want to move or copy an object from one display to another); Therefore it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by BAR-ZEEV to incorporate the teachings of Schwesinger to use hand gestures to provide a command to the system to act on the targeted object in order to allow user to control the targeted object in different ways, such as moving, resizing, selecting, or activating. The modification would allow user to perform different functions with the targeted object using simple hand gestures; BAR-ZEEV as modified by Schwesinger don’t expressly disclose detecting hand gestures comprises a movement that begins at a first location in the series of images and extends to a second location in the series of images of images; In the same field of endeavor, Wright discloses virtual display device and control method (see abstract); Wright discloses detecting hand gestures comprises a movement that begins at a first location in the series of images and extends to a second location in the series of images of images (par 0024; discloses in the example illustrated in FIG. 2, the head mounted display device 10 worn by the user 26 may be configured to detect motion of the user's hand. Based on a series of images captured by the optical sensor system 16, the head mounted display device 10 may determine whether motion of hand 38 of the user 26 is trackable. For example, the user's hand at positions 38 and 38A are within the field of view of the optical sensor system 16. Accordingly, motion of the user's hand moving from position 38 to position 38A over time T1 is trackable by the head mounted display device 10; par 0030; discloses if motion of the user's hand is currently trackable, the head mounted display device 10 may be configured to track motion of the hand in the images to identify a user hand gesture. In response to at least identifying the user hand gesture, the head mounted display device may execute a programmatic function that may be selected as described above); Therefore it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by BAR-ZEEV as modified by Schwesinger to incorporate the teachings of Wright to detect motion of the user hand from one location to another using series of images in order to accurately identify specific user gestures to provide the command to the device. With respect to claim 20, BAR-ZEEV discloses a non-transitory memory storing one or more programs, which, when executed by one or more processors of a device, including an image sensor (par 0060; discloses system 100 uses image sensor(s) 108 to receive user inputs, such as hand gestures);and a display (fig. 1A; system 100 includes display 120); cause the device to: (fig. 
1A; system 100 includes processor 102 and memory 106; par 0121; discloses executable instructions for performing the features of methods 1600, 1700, and/or 1800 described above are, optionally, included in a transitory or non-transitory computer-readable storage medium (e.g., memory(ies) 106) or other computer program product configured for execution by one or more processors (e.g., processor(s) 102).); detect a gaze input corresponding to a user focus location inside the region (see fig. 3 and fig. 4; par 0086; discloses While affordance 306 is selected, device 300 receives an input (e.g., an eye gesture, a body gesture, a voice input, a controller input, or a combination or portion thereof, such as the exemplary inputs described above). In the illustrated example, the input includes user 200 changing the position of his eyes such that his gaze direction moves on display 302 from location 308 to location 400 shown in FIG. 4.); select, based on the gaze input, an object placement location inside the region that is proximate to the user focus location; and display, on the display, a movement of the virtual object from the initial location outside the region to the object placement location inside the region (par 0086; discloses In response to receiving the input, device 300 performs an action associated with affordance 306 in accordance with the input. In the example illustrated in FIG. 4, device 300 moves affordance 306 in accordance with the change in the gaze direction of user 200, translating affordance 306 upward and to the left on display 302 from the location of affordance 306 shown in FIG. 3 to the location shown in FIG. 4; fig. 19Q-19S; discloses an example where a cup is moved from one region to another region i.e., a cup is moved from rectangular table (i.e. corresponding to outside region) to round table (i.e. corresponding to inside region)); BAR-ZEEV doesn’t expressly disclose detect, in a series of images captured the image sensor, a gesture corresponding to a command to move a virtual object inside a region of an environment from an initial location outside the region, wherein the gesture includes a hand movement that extends in a direction toward the region; In the same field of endeavor, Schwesinger discloses system and method for identifying a target object based one eye tracking and gesture recognition (see abstract; par 0040; discloses the targeted object may be an object that the user intends to move, resize, select, or activate, for example); Schwesinger discloses detect, in a series of images captured the image sensor, (par 0067; discloses At 106, video imaging a head and pointer of the user is received from the machine vision system. The video may include a series of time-resolved depth images from a depth camera and/or a series of time-resolved images from a flat-image camera) a gesture corresponding to a command to move a virtual object inside a region of an environment from an initial location outside the region, wherein the gesture includes a hand movement that extends in a direction toward the region (par 0041; discloses In the example of FIG. 1, pointer 80 is one of the user's fingers. The user, while gazing at targeted object 72′, positions the pointer to partly occlude the targeted object (from the user's own perspective). Fig. 1; discloses the hand gesture extends in a direction towards the region; Par 0045; discloses Once an object is targeted, user 12 may signal further action to be taken on the object. 
One or more of the NUI engines of compute system 16 may be configured to detect the user's intent to act on the targeted object. Par 0047; discloses the user may want to move a targeted (optionally selected) object to a different position on display 14. The user may signal this intent by maintaining gaze on the targeted object while moving the pointer up, down, or to the side. Pointer-projection engine 82 recognizes the change in pointer coordinates X.sub.pp, Y.sub.pp and directs the OS to move the targeted object to the changed coordinates); Schwesinger further discloses moving object from one region to another (par 0054; discloses In cases where a plurality of displays 14 are operatively coupled to compute system 16, user 12 may want to move or copy an object from one display to another); Therefore it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by BAR-ZEEV to incorporate the teachings of Schwesinger to use hand gestures to provide a command to the system to act on the targeted object in order to allow user to control the targeted object in different ways, such as moving, resizing, selecting, or activating. The modification would allow user to perform different functions with the targeted object using simple hand gestures; BAR-ZEEV as modified by Schwesinger don’t expressly disclose detecting hand gestures comprises a movement that begins at a first location in the series of images and extends to a second location in the series of images of images; In the same field of endeavor, Wright discloses virtual display device and control method (see abstract); Wright discloses detecting hand gestures comprises a movement that begins at a first location in the series of images and extends to a second location in the series of images of images (par 0024; discloses in the example illustrated in FIG. 2, the head mounted display device 10 worn by the user 26 may be configured to detect motion of the user's hand. Based on a series of images captured by the optical sensor system 16, the head mounted display device 10 may determine whether motion of hand 38 of the user 26 is trackable. For example, the user's hand at positions 38 and 38A are within the field of view of the optical sensor system 16. Accordingly, motion of the user's hand moving from position 38 to position 38A over time T1 is trackable by the head mounted display device 10; par 0030; discloses if motion of the user's hand is currently trackable, the head mounted display device 10 may be configured to track motion of the hand in the images to identify a user hand gesture. In response to at least identifying the user hand gesture, the head mounted display device may execute a programmatic function that may be selected as described above); Therefore it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by BAR-ZEEV as modified by Schwesinger to incorporate the teachings of Wright to detect motion of the user hand from one location to another using series of images in order to accurately identify specific user gestures to provide the command to the device. 
With respect to claim 21, BAR-ZEEV as modified by Schwesinger and Wright discloses wherein the gesture is a flinging gesture (BAR-ZEEV; par 0082; discloses Examples of a hand gesture include placement of a hand at a location corresponding to the location of affordance 306 (e.g., between the user and the display of affordance 306), a wave, a pointing motion (e.g., at affordance 306), or a gesture with a predefined motion pattern). Claim(s) 2-4, 24 is/are rejected under 35 U.S.C. 103 as being unpatentable over BAR-ZEEV et al (US Pub 2020/0225747) in view of Schwesinger et al (US Pub 2016/0162082), Wright et al (US Pub 2016/0378294) and Algreatly (US Pub 2015/0277699). With respect to claim 2, BAR-ZEEV as modified by Schwesinger and Wright don’t expressly disclose further comprising displaying, on the display, the region, wherein displaying the region comprises displaying a first two-dimensional virtual surface enclosed by a boundary; In the same field of endeavor, Algreatly discloses head mounted display device and interaction method with the virtual environment (see abstract); Algreatly discloses further comprising displaying, on the display, the region, wherein displaying the region comprises displaying a first two-dimensional virtual surface enclosed by a boundary; (Algreatly; fig. 9; discloses a first two-dimensional window A enclosed by a boundary); Therefore it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by BAR-ZEEV as modified by Schwesinger and Wright to incorporate the teachings of Algreatly to display plurality of virtual windows in the environment such that various different contents such as static/dynamic contents may be displayed in the designated virtual windows; With respect to claim 3, BAR-ZEEV as modified by Schwesinger, Wright and Algreatly discloses wherein displaying the region further comprises displaying a second two-dimensional virtual surface substantially parallel to the first two- dimensional virtual surface (Algreatly; fig. 9; discloses a second two-dimensional window E enclosed by a boundary that is parallel to the first two-dimensional window A); With respect to claim 4, BAR-ZEEV as modified by Schwesinger, Wright and Algreatly discloses further comprising displaying the virtual object on at least one of the first two-dimensional virtual surface or the second two-dimensional virtual surface (Algreatly; par 0040; discloses FIG. 1 illustrates an example of virtual data presented in a first window 110, second window 120, and third window 130 on an OHMD such as GOOGLE GLASS. The virtual data in each window contains text, image, video, or the like). With respect to claim 24, BAR-ZEEV as modified by Schwesinger and Wright don’t expressly disclose wherein the region comprises the first two-dimensional virtual surface, the second two-dimensional virtual surface, and the space between the first two-dimensional virtual surface and the second two-dimensional virtual surface; In the same field of endeavor, Algreatly discloses head mounted display device and interaction method with the virtual environment (see abstract); Algreatly discloses wherein the region comprises the first two-dimensional virtual surface, the second two-dimensional virtual surface, and the space between the first two-dimensional virtual surface and the second two-dimensional virtual surface; (fig. 6, fig. 8; discloses the region comprises plurality of virtual windows and fig. 
9; discloses at least two of the plurality of windows 150 are parallel to each other and spaced apart from each other); Therefore it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by BAR-ZEEV as modified by Schwesinger and Wright to incorporate the teachings of Algreatly to display plurality of virtual windows in the environment such that various different contents such as static/dynamic contents may be displayed in the designated virtual windows; Claim(s) 13-15, 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over BAR-ZEEV et al (US Pub 2020/0225747) in view of Schwesinger et al (US Pub 2016/0162082), Wright et al (US Pub 2016/0378294) and CHIU (US Pub 2018/0117470). With respect to claim 13, BAR-ZEEV as modified by Schwesinger and Wright doesn’t expressly disclose further comprising determining the object placement location based on a location of a second virtual object in the environment; CHIU further discloses determining the object placement location based on a location of a second virtual object in the environment (par 0050; discloses the processing components 110 may modify the first virtual position or the second virtual position to avoid the collision between the first virtual object VOB1 and the second virtual object VOB2 in the virtual space VSP.); Therefore it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by BAR-ZEEV as modified by Schwesinger and Wright to adjust object location of the first virtual object as disclosed by CHIU in order to prevent any overlapping of plurality objects. With respect to claim 14 BAR-ZEEV as modified by Schwesinger, Wright and CHIU further discloses wherein the object placement location is at least a threshold distance away from the location of the second virtual object (CHIU; par 0051; discloses he first physical position of the first physical object POB1 may be acquired from the first physical object POB1 or a device for positioning the first physical object POB1. In one embodiment, the initial distance Di is a predefined distance to separate the first virtual object VOB1 and the second virtual object VOB2 in the virtual space VSP, such that the first virtual object VOB1 and the second virtual object VOB2 will not be collided or overlapped to each other in the virtual space VSP). With respect to claim 15, BAR-ZEEV as modified by Schwesinger, Wright and CHIU further discloses wherein the threshold distance is based on at least one of a dimension or a boundary of at least one of the first virtual object or the second virtual object (CHIU; par 0051; discloses the predefined distance is to prevent any overlapping of objects; hence threshold distance is based on the boundary of the at least one of the first virtual object or the second virtual object; see fig. 3C as well). With respect to claim 17, BAR-ZEEV as modified by Schwesinger and Wright discloses further comprising displaying a movement of a second virtual object (BAR-ZEEV; fig. 19B-fig. 
19C; discloses stack of pictures 1908 are moved from table 1912 and presented upright and spread out in the middle of the field of view of user 200); BAR-ZEEV as modified by Schwesinger and Wright don’t expressly disclose moving a second virtual object within a threshold distance of the object placement location; CHIU further discloses movement of a second virtual object within a threshold distance of the object placement location; (CHIU; par 0051; discloses he first physical position of the first physical object POB1 may be acquired from the first physical object POB1 or a device for positioning the first physical object POB1. In one embodiment, the initial distance Di is a predefined distance to separate the first virtual object VOB1 and the second virtual object VOB2 in the virtual space VSP, such that the first virtual object VOB1 and the second virtual object VOB2 will not be collided or overlapped to each other in the virtual space VSP); Therefore it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by BAR-ZEEV as modified by Schwesinger and Wright to adjust object location of the first virtual object as disclosed by CHIU in order to prevent any overlapping of plurality objects. Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over BAR-ZEEV et al (US Pub 2020/0225747) in view of Schwesinger et al (US Pub 2016/0162082), Wright et al (US Pub 2016/0378294) and DAI et al (US Pub 2019/0362559). With respect to claim 7, BAR-ZEEV as modified by Schwesinger and Wright discloses changing the size of the virtual object (BAR-ZEEV; par 0087; discloses In addition to moving an affordance, exemplary actions include transforming the affordance or a representation of an object associated with the affordance (e.g., rotating, twisting, stretching, compressing, enlarging, and/or shrinking affordance 306)); BAR-ZEEV as modified by Schwesinger and Wright don’t expressly disclose further comprising determining a display size of the virtual object as a function of a size of the physical element; In the same field of endeavor, DAI discloses augmented reality method for displaying virtual objects where Dai discloses determining a display size of the virtual object as a function of a size of the physical element (par 0045; discloses the terminal device may calculate a scaling ratio according to a ratio of the virtual object to the target region, and the virtual object to be displayed can be adjusted with the scaling ratio); Therefore it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by BAR-ZEEV as modified by Schwesinger and Wright to adjust the size of the virtual objects with respect to the size of the physical object as disclosed by Dai in order to properly present the virtual images in accordance with the physical environment providing user with better augmented reality system. Allowable Subject Matter Claims 22 objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. 
With respect to claim 22, the cited references fails to disclose wherein the first two-dimensional surface corresponds to a graphical user interface for a messaging application that includes a messages area for displaying sent or received messages and an input field for entering a message to be sent; wherein the virtual object includes an image that the user flings towards the graphical user interface for the messaging application; and wherein displaying the movement of the virtual object comprises placing a reduced- size version of the image in the input field. Response to Arguments Applicant’s arguments with respect to claim(s) 1, 19-20 have been considered however they are moot because arguments do not apply to new reference being used in the current rejection. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SUJIT SHAH whose telephone number is (571)272-5303. The examiner can normally be reached Monday-Friday, 9:00 am-6:00 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Eason can be reached at (571)270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SUJIT SHAH/Examiner, Art Unit 2624
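For readers tracing the contested claim 1 limitations against the cited art, the claimed method reduces to four steps: detect, in a series of images, a hand gesture that extends toward a region; detect a gaze input at a focus location inside that region; select a placement location proximate to the focus; and display the object's movement from outside the region to that location. The sketch below is only an illustrative rendering of that flow; every name in it is hypothetical and none of it comes from the application or the cited references.

```python
# Hypothetical, self-contained sketch of the claim 1 flow; all names are illustrative only.
from dataclasses import dataclass

@dataclass
class Region:
    x_min: float
    x_max: float
    y_min: float
    y_max: float

    def contains(self, point):
        x, y = point
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

    def clamp(self, point):
        # Nearest location inside the region to the given point.
        x, y = point
        return (min(max(x, self.x_min), self.x_max),
                min(max(y, self.y_min), self.y_max))

def hand_moves_toward(hand_positions, region):
    # Step 1: a gesture detected across a series of images, where the hand movement
    # begins at a first location and extends in a direction toward the region.
    (x0, y0), (x1, y1) = hand_positions[0], hand_positions[-1]
    cx, cy = (region.x_min + region.x_max) / 2, (region.y_min + region.y_max) / 2
    return abs(cx - x1) + abs(cy - y1) < abs(cx - x0) + abs(cy - y0)

def place_object(hand_positions, gaze_point, region, object_location):
    if not hand_moves_toward(hand_positions, region):
        return object_location            # no move command detected
    # Step 2: gaze input corresponding to a user focus location inside the region.
    if not region.contains(gaze_point):
        return object_location
    # Step 3: placement location inside the region, proximate to the focus location.
    placement = region.clamp(gaze_point)
    # Step 4: a real system would display movement of the virtual object from its
    # initial location outside the region to the placement location inside it.
    return placement

region = Region(5, 10, 5, 10)
print(place_object([(0, 0), (2, 2), (4, 4)], (7, 8), region, (0, 0)))   # -> (7, 8)
```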

Prosecution Timeline

Mar 20, 2023
Application Filed
Nov 28, 2023
Non-Final Rejection — §103, §112
Apr 03, 2024
Response Filed
Apr 03, 2024
Examiner Interview Summary
Apr 03, 2024
Applicant Interview (Telephonic)
Apr 24, 2024
Final Rejection — §103, §112
Aug 19, 2024
Applicant Interview (Telephonic)
Aug 19, 2024
Examiner Interview Summary
Aug 20, 2024
Request for Continued Examination
Aug 21, 2024
Response after Non-Final Action
Sep 17, 2024
Non-Final Rejection — §103, §112
Jan 30, 2025
Applicant Interview (Telephonic)
Jan 30, 2025
Examiner Interview Summary
Jan 30, 2025
Response Filed
Mar 27, 2025
Final Rejection — §103, §112
Aug 01, 2025
Interview Requested
Aug 08, 2025
Applicant Interview (Telephonic)
Aug 08, 2025
Examiner Interview Summary
Sep 03, 2025
Request for Continued Examination
Sep 08, 2025
Response after Non-Final Action
Oct 02, 2025
Non-Final Rejection — §103, §112
Dec 05, 2025
Examiner Interview Summary
Dec 05, 2025
Applicant Interview (Telephonic)
Dec 15, 2025
Response Filed
Mar 06, 2026
Final Rejection — §103, §112 (current)
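As a cross-check on the header figures ("OA Round: 6 (Final)" and roughly three years of pendency), the elapsed time and rejection count can be derived from the dates listed above; a minimal sketch, with dates copied from the timeline:

```python
# Elapsed prosecution time and office-action count, from the timeline dates above.
from datetime import date

filed = date(2023, 3, 20)
current_final = date(2026, 3, 6)
office_actions = [date(2023, 11, 28), date(2024, 4, 24), date(2024, 9, 17),
                  date(2025, 3, 27), date(2025, 10, 2), date(2026, 3, 6)]

elapsed = (current_final - filed).days
print(f"{elapsed} days (~{elapsed / 365.25:.1f} years), "
      f"{len(office_actions)} office actions to date")
# -> 1082 days (~3.0 years), 6 office actions to date
```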

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603027
DISPLAY PANEL AND DRIVING METHOD THEREOF, AND DISPLAY APPARATUS
2y 5m to grant • Granted Apr 14, 2026
Patent 12596514
CONTROL METHOD AND CONTROL DEVICE
2y 5m to grant • Granted Apr 07, 2026
Patent 12592177
FOVEATED DISPLAY BURN-IN STATISTICS AND BURN-IN COMPENSATION SYSTEMS AND METHODS
2y 5m to grant • Granted Mar 31, 2026
Patent 12572234
DISPLAY DEVICE AND METHOD OF DRIVING THE SAME
2y 5m to grant • Granted Mar 10, 2026
Patent 12567367
Display Device and Pixel Sensing Method Thereof
2y 5m to grant • Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 66%
With Interview: 77% (+11.4%)
Median Time to Grant: 2y 8m
PTA Risk: High
Based on 408 resolved cases by this examiner. Grant probability derived from career allow rate.
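The "With Interview" projection appears to be the base grant probability plus the examiner's interview lift, expressed in percentage points; a minimal sketch of that assumed relationship using the figures above:

```python
# Assumed relationship between the projected figures shown above: base grant
# probability (career allow rate) plus interview lift in percentage points.
base_grant_probability = 269 / 408 * 100   # ~65.9%, shown as 66%
interview_lift = 11.4                      # percentage points
with_interview = base_grant_probability + interview_lift
print(f"With interview: ~{with_interview:.0f}%")   # -> ~77%
```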

Free tier: 3 strategy analyses per month