Prosecution Insights
Last updated: April 19, 2026
Application No. 18/229,312

WEARABLE ELECTRONIC DEVICE DISPLAYING VIRTUAL OBJECT AND METHOD FOR CONTROLLING THE SAME

Current Status: Non-Final Office Action (§103)
Filed: Aug 02, 2023
Examiner: BEUTEL, WILLIAM A
Art Unit: 2616
Tech Center: 2600 (Communications)
Assignee: Samsung Electronics Co., Ltd.
OA Round: 3 (Non-Final)

Outlook: Favorable
Grant Probability: 70% (90% with interview)
Expected OA Rounds: 3-4
Estimated Time to Grant: 2y 7m

Examiner Intelligence

Career Allow Rate: 70% (328 granted / 469 resolved), +7.9% vs Tech Center average (above average)
Interview Lift: +20.4% higher allowance rate among resolved cases with an interview
Typical Timeline: 2y 7m average prosecution; 28 applications currently pending
Career History: 497 total applications across all art units
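
These tiles reduce to simple arithmetic on the career counts, and the same figures drive the Prosecution Projections section below. A minimal sketch of that arithmetic; it assumes, consistent with the displayed 90% figure, that the "with interview" estimate is the career allow rate plus the interview lift (variable names are illustrative, not the tool's):

```python
# Career statistics shown in the tiles above.
granted = 328
resolved = 469
interview_lift = 0.204   # reported allowance lift for resolved cases with an interview

career_allow_rate = granted / resolved   # 328/469 ≈ 0.699 -> displayed as 70%

# Assumption (matches the displayed 90% figure): the with-interview estimate is
# the career rate plus the interview lift, capped at 100%.
with_interview_estimate = min(career_allow_rate + interview_lift, 1.0)

print(f"Career allow rate:        {career_allow_rate:.1%}")        # 69.9%
print(f"Estimated with interview: {with_interview_estimate:.1%}")  # 90.3%
```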

Statute-Specific Performance

§101: 9.9% (-30.1% vs TC avg)
§103: 49.8% (+9.8% vs TC avg)
§102: 10.7% (-29.3% vs TC avg)
§112: 22.0% (-18.0% vs TC avg)
Tech Center averages are estimates (the black line in the original chart). Based on career data from 469 resolved cases.
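
Each "vs TC avg" delta is simply the examiner's rate minus the estimated Tech Center baseline for that statute. A short sketch, using only the figures in the table above, that back-solves the baseline each row implies (the ~40% value is derived here from the table, not a published USPTO number):

```python
# (statute, examiner rate %, delta vs Tech Center average %) as listed above.
rows = [
    ("§101", 9.9, -30.1),
    ("§103", 49.8, +9.8),
    ("§102", 10.7, -29.3),
    ("§112", 22.0, -18.0),
]

for statute, examiner_rate, delta in rows:
    implied_tc_avg = examiner_rate - delta  # every row back-solves to the same ~40% estimate
    print(f"{statute}: examiner {examiner_rate:.1f}% vs implied TC average {implied_tc_avg:.1f}%")
```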

Office Action

Rejection basis: §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/26/2026 has been entered.

Response to Amendment

Claim 21 has been canceled and as such all rejections to the claim have been withdrawn as moot. Claim 22 is newly added and currently under consideration.

Response to Arguments

Applicant’s arguments, see applicant’s correspondence, filed 1/26/2026, with respect to the rejection(s) of claim(s) 1 and 11, and claims dependent thereon, under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Mene et al.

Claim Objections

Claim 22 is objected to because of the following informalities: Claim 22 fails to include a period to indicate the end of the claim. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-4, 10, 11-14 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over: Burns et al. (US 2020/0082632 A1) in view of Mene et al. (US 2023/0169695 A1).

Regarding claim 1, Burns discloses: A wearable electronic device (Burns, ¶26: HMD; ¶33: device 10 is head-mounted display – Fig. 2) comprising: a display (Burns, ¶26: pass through or translucent display); a camera (Burns, ¶33: HMD using cameras; ¶64: camera included in HMD); and memory storing instructions; (Burns, ¶60: memory including programs) at least one processor (Burns, Fig.
6 and ¶56: device 10 includes processing units 142); wherein the instructions, when executed by the at least one processor individually or collectively, cause the wearable device to (Burns, ¶60: program modules; ¶63: processor performing operations by executing code stored in memory): based on an image obtained through the camera, (Burns, ¶64: obtain image data representing physical environment using image sensor/captured by camera included in HMD) analyze a real space, (Burns, ¶67: detects an attribute in the physical environment depicted in the view based on the location of the virtual element in the view) based on a result of the analyzed real space, obtain a plurality of areas where a virtual object is displayable, (Burns, ¶67: identify walls, ceilings, floors, tables, ramps, planar surfaces, curved surfaces, round surfaces, textured surfaces, surfaces of particular colors or textures, etc. In some implementations, detecting the attribute includes identifying a classification of a portion of the physical environment depicted in the view using a classifier or identifying an amount of the portion upon which the virtual element can be positioned) based on receiving a first user input for displaying the virtual object, (Burns, ¶66: the change may include placement of the virtual element at a location in the view relative to the physical environment depicted in the view. Moreover, the change may be received by the device via input positioning the virtual element at the location.) identify a first area among a plurality of areas (Burns, ¶35: “The modality of a virtual element in the content is based on the location of the virtual element relative to the content. In some implementations, the modality of a virtual element is determined when the virtual element is first inserted into the content, e.g., based on the initial position of the virtual element”) identify a shape of the first area (Burns, ¶67: detecting the attribute includes detecting a surface, an adjacent surface or element, or a type of surface upon which the virtual element is positioned in the physical environment depicted in the view, e.g., determining whether the virtual element is positioned in free space or on a floor, table, wall, or ceiling . . . a machine-learning model is trained to identify walls, ceilings, floors, tables, ramps, planar surfaces, curved surfaces, round surfaces, textured surfaces, surfaces of particular colors or textures, etc.), identify a first shape of the virtual object corresponding to the shape of the first area (Burns, ¶68: method 700 selects a modality (e.g., an appearance, function, or interactivity) of the virtual element based on the attribute – note ¶68 discussing attributes of surface; ¶42: FIG. 1 illustrates how the modality of the virtual element depends on its location relative to one or more attributes (e.g., being on a surface, vertical surface, a horizontal surface, a wall, a floor, a ceiling, a table, in mid-air, etc.) of the CGR environment 15. In this example, the modality of the virtual element depends on whether the virtual element is positioned on end table 25, on wall 30, on floor 35, or in open 3D space 40. ¶43: The modalities, e.g., appearances, functions, and interactive features, of the virtual element can be configured by the virtual element creator, for example, who may create different modality state definitions for each of multiple positional states (e.g., on horizontal surface, on vertical surface, in mid-air, etc.) 
associated with multiple CGR content attributes (e.g., surfaces, horizontal surfaces, vertical surfaces, walls, floors, tables, ceilings, etc.). Figs. 1-4 and ¶48: As with the examples of FIGS. 1 and 2, the CGR environment 15 depicted in the examples of FIGS. 3 and 4, includes a virtual element that has different modalities when placed in different positions relative to attributes of the CGR content. The modalities, e.g., appearances, functions, and interactive features, of the virtual element can be configured by the virtual element creator, for example, who may create different modality state definitions for each of multiple positional states (e.g., on horizontal surface, on vertical surface, in mid-air, etc.) associated with multiple CGR content attributes (e.g., surfaces, horizontal surfaces, vertical surfaces, walls, floors, tables, ceilings, etc.). ), identify at least one function related to the virtual object and corresponding to the shape of the first area (Burns, ¶68: method 700 selects a modality (e.g., an appearance, function, or interactivity) of the virtual element based on the attribute – note ¶68 discussing attributes of surface; ¶¶42-43 and 48), and display the virtual object of the first shape comprising at least one icon corresponding to the at least one function. (Burns, ¶69: the method 700 updates the view on the display of the device such that the view includes the virtual element according to the selected modality; Figs. 1-4 and ¶¶44-46, e.g. based on the virtual element being positioned on the wall 30 and thus associated with a vertical surface attribute, the virtual element is displayed in the vertical surface modality 45 (e.g., as a weather sign), and ¶50-52: virtual element may provide modality-specific functions and interactive features, such as face of clock etc.; For example, if a user positions the virtual element near a horizontal surface of table 75 in CGR environment 15, the virtual element may be displayed in a horizontal surface modality 90. In the horizontal surface modality 90, the virtual element has the appearance of a clock radio) Burns does not explicitly disclose the identifying a first area meeting a set condition, comprising one of a largest area among the plurality of areas, an area where the virtual object is disposed statistically most frequently, an area where there is a history for a user to have disposed the corresponding virtual object most frequently, or an area with a highest relation with an application corresponding to the virtual object. Mene however discloses: for displaying the virtual object, identify a first area meeting a set condition among the plurality of areas, wherein the set condition comprises one of: an area where the virtual object is disposed statistically most frequently, an area where there is a history for a user to have disposed the corresponding virtual object most frequently, or an area with a highest relation with an application corresponding to the virtual object. (Note the limitations are recited as alternatives and only one of which is required. Mene, ¶22: AR interface program determines devices worn and carried by user, including time and geolocation data during a user’s activities, and generating usage data of when and where a user frequently uses a wearable device with AR device, including monitoring and identifying when wearable devices 120a-n are used in conjunction with other tasks and operations of both AR device 110 and other wearable devices 120a-n, e.g.
“Call” operation of the smartphone co-occurs frequently with connecting the speaker of the smartwatch; ¶¶23-24: determine relative positions of AR device and wearable device, including 3D vector map of positions for AR device and wearable device; ¶26 further discloses determining historic patterns of usage of AR interface with wearable devices; ¶28: When usage data 115 and position data 116 indicate that a wearable device is missing or otherwise not in an expected position based on historic usage, AR interface program 112 generates a virtual interface overlay at the expected position, relative to the display driver of AR device 110 and the user's augmented view, providing a virtual interface that mimics some or all of the functionality of the wearable device - This section of Mene teaches the condition of an area with a highest relation with an application corresponding to the virtual object, e.g. phone call of wearable device, and displaying at that location; In addition, the teachings would render the display of augmented reality at locations based on user history of use, as the system coordinates a user’s history of usage and location with the placement of a virtual object; although the usage and history aspect is based on location of a real object, i.e. wearable device, this is merely a substitution of parts, as Mene includes coordinating previous use and location in virtual space – i.e. position data within the 3D vector map of positions for coordinating the overlay of the AR virtual object.) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the augmented reality user interface for relocating and changing virtual objects as provided by Burns, with the technique of identifying a first area meeting a condition for a virtual object as provided by Mene, using known electronic interfacing and programming techniques. The modification results in an improved placement of a virtual object in augmented reality for easier usability, and by better accounting for user context, such as placing the object in an expected location for user to more easily find and use the object, to avoid unnecessary searching of an environment by the user etc. Regarding claim 11, the device of claim 1 performs the method of claim 11 and as such claim 11 is rejected based on the same rationale as claim 1 set forth above. Regarding claim 2, Burns further discloses: based on receiving a second user input to move the virtual object of the first shape to a second area that is different from the first area, replace the virtual object of the first shape displayed in the first area with a virtual object of a second shape that is different from the first shape corresponding to the shape of the second area. (Burns, ¶35: The modality of a virtual element in the content is based on the location of the virtual element relative to the content. In some implementations, the modality of a virtual element is determined when the virtual element is first inserted into the content, e.g., based on the initial position of the virtual element. In some implementations, the modality of a virtual element is determined and changed when the virtual element is moved within the content; ¶39: When a user of device 10 places or repositions a virtual element within a CGR environment 15 or changes the CGR environment 15 around the virtual element, the modality of the virtual element changes or adapts; Figs.
1-4, ¶¶44-46, ¶50-52 and ¶69) Regarding claim 12, the device of claim 2 performs the method of claim 12 and as such claim 12 is rejected based on the same rationale as claim 2 set forth above. Regarding claim 3, Burns further discloses: wherein at least one or more icons included in the virtual object of the second shape or type of a function corresponding to the virtual object of the second shape is different from that of the virtual object of the first shape. (Burns, Figs. 1-4 and ¶43: The modalities, e.g., appearances, functions, and interactive features, of the virtual element can be configured by the virtual element creator, for example, who may create different modality state definitions for each of multiple positional states (e.g., on horizontal surface, on vertical surface, in mid-air, etc.) associated with multiple CGR content attributes (e.g., surfaces, horizontal surfaces, vertical surfaces, walls, floors, tables, ceilings, etc.).; ¶¶44-45 discusses various functionality and virtual element features displayed based on surface modality – see Fig. 1) Regarding claim 13, the device of claim 3 performs the method of claim 13 and as such claim 13 is rejected based on the same rationale as claim 3 set forth above. Regarding claim 4, Burns further discloses: wherein an icon corresponding to a first function included in the virtual object of the first shape is a 2D icon, and an icon corresponding to the first function included in the virtual object of the second shape is a 3D icon. (Burns, Fig. 1 and ¶¶45-46: In contrast to the 2D appearance of the vertical surface modality 45, the open space modality 50 has a 3D appearance, e.g. the open space modality 50 provides a 3D representation of the current weather or predicted weather conditions in the user's current geographic location, e.g., displaying a floating sun, a rain cloud, a tornado, etc.; Note ¶¶41-42 further explains Fig. 1 modality based on location) Regarding claim 14, the device of claim 4 performs the method of claim 14 and as such claim 14 is rejected based on the same rationale as claim 4 set forth above. Regarding claim 10, Burns further discloses: Wherein the first area has a shape corresponding to a bottom surface, a shape corresponding to a wall surface, or a shape corresponding to the virtual object being disposed mid-air and wherein the second area has a shape different from the first area (Burns, Fig. 1-4 and ¶¶44-46, shape shown based on wall surface or in air) Regarding claim 20, the device of claim 10 performs the method of claim 20 and as such claim 20 is rejected based on the same rationale as claim 10 set forth above. Claim(s) 5 and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over: Burns et al. (US 2020/0082632 A1) in view of Mene et al. (US 2023/0169695 A1) and in further view of Berliner et al. (US 2022/0253093 A1). Regarding claim 5, the limitations included from claim 2 are rejected based on the same rationale as claim 2 set forth above. 
Further regarding claim 5, Burns further discloses: wherein the at least one processor is further configured to: based on receiving a third user input for adding the icon related to a second function (Burns, ¶35: In some implementations, the modality of a virtual element is determined when the virtual element is first inserted into the content, e.g., based on the initial position of the virtual element; ¶44: the vertical surface modality 45 of the virtual elements displays a 2D image or sign representing the weather and the high and low predicted daily temperatures; ¶46: The virtual element in this example is positioned relative to a horizontal flat surface of depiction of a real-world end table 25. Based on this location, the virtual element is displayed in a horizontal surface modality 55, e.g., as a decorative snow globe in which the current or predicted weather is displayed in a 3D manner within the globe; i.e. icon either 2D or 3D based on inserted position, where user places virtual element; See ¶39: “When a user of device 10 places or repositions a virtual element within a CGR environment 15 or changes the CGR environment 15 around the virtual element, the modality of the virtual element changes or adapts” such that input for adding based on different locations are inputs; ¶50 also discloses user can customize modalities for how virtual element is displayed, user presented with modality-specific display options) Burns does not explicitly disclose adding an icon to the objects based on user input as claimed. Adding icons related to functions to virtual objects based on user input, however, would have been known at the time of the effective filing date of the claimed invention. Berliner discloses: based on receiving a user input for adding the icon related to a function to the virtual object, add an icon related to the function to the virtual object of the shape, (Berliner, ¶550: controlling virtual display includes adding or deleting elements from the display; ¶746: docking at least one virtual object to the virtual display, includes adding the virtual object to the data-structure of virtual objects docked to the virtual display, or “docking a first virtual object to a second virtual object may include adding the first virtual object to a data-structure of virtual objects docked to the second virtual object (such as a list, a set, a database, and so forth)” Fig. 65 and ¶795: virtual objects docked to positions in a virtual plane, where virtual objects 6512 associated with first virtual plane 6510) The combination of the references teaches the full limitation of the claim, wherein the at least one processor is further configured to: based on receiving a third user input for adding an icon related to a second function to the virtual object of the first shape, add a 2D icon related to the second function to the virtual object of the first shape, and based on receiving a fourth user input for adding the 2D icon related to the second function to the virtual object of the second shape, add a 3D icon related to the second function to the virtual object of the second shape. Both Burns and Berliner are directed to user interfaces for user manipulation of virtual objects within augmented reality systems.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the augmented reality user interface for relocating and changing virtual objects as provided by Burns, with the technique of identifying a first area meeting a condition for a virtual object as provided by Mene, with the technique of docking virtual objects to other virtual objects as provided by Berliner, using known electronic interfacing and programming techniques. The modification results in an improved augmented reality user interface by allowing a user to associate functional objects together to build a tailored interface that better suits the user’s needs, for greater flexibility and usage of an augmented reality user interface. Regarding claim 15, the device of claim 5 performs the method of claim 15 and as such claim 15 is rejected based on the same rationale as claim 5 set forth above. Claim(s) 6-7 and 16-17 is/are rejected under 35 U.S.C. 103 as being unpatentable over: Burns et al. (US 2020/0082632 A1) in view of Mene et al. (US 2023/0169695 A1) and in further view of Schwarz et al. (US 2018/0286126 A1). Regarding claim 6, the limitations included from claim 2 are rejected based on the same rationale as claim 2 set forth above and incorporated herein. Further regarding claim 6, Burns further teaches: wherein the at least one processor is further configured to: based on approach of the virtual object of the first shape from the first area to around the second area through the second user input, replace the virtual object of the first shape with the virtual object of the second shape, (Burns, Fig. 3 and ¶53: if a user positions the virtual element near a horizontal surface of table 75 in CGR environment 15, the virtual element may be displayed in a horizontal surface modality 90; ¶¶42-43). Burns does not explicitly disclose based on the second user input being completed, fix the virtual object of the second shape in the second area. This limitation is essentially claiming anchoring of an object upon the completion of a user input, which is well-known in the field of augmented reality. Schwarz discloses: based on the second user input being completed, fix the virtual object of the second shape in the second area (Schwarz, Figs. 5-8 and ¶60: With reference to FIG. 8, in other examples when the user interface element layout program 12 determines that the motorcycle 244 and/or UI elements 76 are within the predetermined distance 500 of table top 236, both the UI elements 76 and the motorcycle 244 are transitioned to display on the table top; ¶61: when the program determines that one or more of a virtual object and user interface element(s) are within the predetermined distance, the program may display the one or more user interface elements on the surface when the user provides a release input) Both Burns and Schwarz are directed to user interfaces for user manipulation of virtual objects within augmented reality systems.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the augmented reality user interface for relocating and changing virtual objects as provided by Burns, with the technique of identifying a first area meeting a condition for a virtual object as provided by Mene, with the technique of aligning and placing a virtual object upon user release as provided by Schwarz, using known electronic interfacing and programming techniques. The modification results in an improved user interface for augmented reality by allowing the attachment of a virtual object to a location in response to a user release to allow an object to stay at a user designated location, instead of frustratingly continuing to move despite the user's preference, and further better aligning objects within the user space. Regarding claim 16, the device of claim 6 performs the method of claim 16 and as such claim 16 is rejected based on the same rationale as claim 6 set forth above. Regarding claim 7, the limitations included from claim 2 are rejected based on the same rationale as claim 2 set forth above and incorporated herein. Further regarding claim 7, Burns does not explicitly disclose based on approach of the virtual object of the first shape from the first area to around the second area through the second user input, display a virtual object indicating that a virtual object is fixable in the second area. This limitation is essentially claiming anchoring of an object upon the completion of a user input, which is well-known in the field of augmented reality. Schwarz discloses: wherein the at least one processor is further configured to: based on approach of the virtual object of the first shape from the first area to around the second area through the second user input, display a virtual object indicating that a virtual object is fixable in the second area (Schwarz, Figs. 9-10 and ¶68: With reference now to FIGS. 9 and 10, in some examples when the user interface element layout program 12 determines that one or more of the virtual object and the one or more UI elements are within the predetermined distance of a physical surface, the program may display one or more of (1) a visual indication with the one or more user interface elements and (2) a visual indication on the physical surface; ¶69: On making this determination, the user interface element layout program 12 may display a visual indication with the user interface elements 76, such as a highlighted border 604 around the elements. In other examples, any other visual indication may be provided, such as adding color to or changing a color of the UI elements 76. In this manner, visual feedback is provided to the user to alert the user that the UI elements 76 may be transitioned to the table top 236). Both Burns and Schwarz are directed to user interfaces for user manipulation of virtual objects within augmented reality systems.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the augmented reality user interface for relocating and changing virtual objects as provided by Burns, with the technique of identifying a first area meeting a condition for a virtual object as provided by Mene, with the technique of providing indicators to inform user that virtual object can be placed on an object as provided by Schwarz, using known electronic interfacing and programming techniques. The modification results in an improved augmented reality user interface for relocating virtual objects within a physical space by providing better indicators to user as to functionality and to better assist a user with understanding where virtual objects can be placed for easier usability. Regarding claim 17, the device of claim 7 performs the method of claim 17 and as such claim 17 is rejected based on the same rationale as claim 7 set forth above. Claim(s) 8-9 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over: Burns et al. (US 2020/0082632 A1) in view of Mene et al. (US 2023/0169695 A1) and in further view of Cazamias et al. (US 2023/0343027 A1). Regarding claim 8, the limitations included from claim 1 are rejected based on the same rationale as claim 1 set forth above and incorporated herein. Further regarding claim 8, Cazamias discloses: wherein the at least one processor is further configured to: based on a virtual object related to a second application, that is different from a first application corresponding to the virtual object of the first shape, approaching the virtual object of the first shape in a state that the virtual object of the first shape is displayed, automatically align the virtual object corresponding to the second application in an area around the virtual object of the first shape. (Cazamias, ¶24: the user 104 may use gestures to move the virtual object 110a from a first position to a second position, as indicated by the solid arrow in FIG. 1C, e.g. the displayed movement is based on the first gesture 112 and the displayed movement may follow a direction of the first gesture 112, where the electronic device 102 detects a movement of the virtual object 110a within a threshold distance of another virtual object; Fig. 1E and ¶26: display grouped objects along a line; ¶39: when the object placement determiner 330 determines that the first virtual object has moved within the threshold distance of the second virtual object, the object placement determiner 330 associates the first virtual object and the second virtual object, e.g., creates a group comprising the first virtual object and the second virtual object; ¶40: a display module 340 causes the display 302 to display virtual objects (e.g., the first virtual object and the second virtual object) at the object placement locations determined by the object placement determiner 330; ¶50: movement of grouped objects toward a point between the two objects – i.e. automatically align; ¶57 discusses drop zone) Both Burns and Cazamias are directed to user interfaces for user manipulation of virtual objects within augmented reality systems.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the augmented reality user interface for relocating and changing virtual objects as provided by Burns, with the technique of identifying a first area meeting a condition for a virtual object as provided by Mene, with the technique of associating virtual objects together within the augmented interface for coordinated interaction and display as provided by Schwarz, using known electronic interfacing and programming techniques. The modification results in an improved user interface by allowing a user to more easily manipulate movement and placement of multiple objects in augmented reality for easier usability. Regarding claim 9, Burns modified by Mene and Cazamias further discloses: wherein the at least one processor is further configured to display a virtual object indicating that the virtual object of the first shape and the aligned virtual object corresponding to the second application are grouped. (Cazamias, ¶24: the user 104 may use gestures to move the virtual object 110a from a first position to a second position, as indicated by the solid arrow in FIG. 1C, e.g. the displayed movement is based on the first gesture 112 and the displayed movement may follow a direction of the first gesture 112, where the electronic device 102 detects a movement of the virtual object 110a within a threshold distance of another virtual object; Fig. 1E and ¶26: display grouped objects along a line; ¶39: when the object placement determiner 330 determines that the first virtual object has moved within the threshold distance of the second virtual object, the object placement determiner 330 associates the first virtual object and the second virtual object, e.g., creates a group comprising the first virtual object and the second virtual object; ¶40: a display module 340 causes the display 302 to display virtual objects (e.g., the first virtual object and the second virtual object) at the object placement locations determined by the object placement determiner 330; ¶50: movement of grouped objects toward a point between the two objects – i.e. automatically align; ¶57 discusses drop zone) Both Burns and Cazamias are directed to user interfaces for user manipulation of virtual objects within augmented reality systems. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the augmented reality user interface for relocating and changing virtual objects as provided by Burns, with the technique of identifying a first area meeting a condition for a virtual object as provided by Mene, with the technique of associating virtual objects together within the augmented interface for coordinated interaction and display as provided by Schwarz, using known electronic interfacing and programming techniques. The modification results in an improved user interface by allowing a user to more easily manipulate movement and placement of multiple objects in augmented reality for easier usability. Regarding claim 18, the device of claim 9 performs the method of claim 18 and as such claim 18 is rejected based on the same rationale as claim 9 set forth above. Claim(s) 22 is/are rejected under 35 U.S.C. 103 as being unpatentable over: Burns et al. (US 2020/0082632 A1) in view of Mene et al. (US 2023/0169695 A1) and in further view of Khoe et al.
(US 2013/0050263 A1). Regarding claim 22, the limitations included from claim 1 are rejected based on the same rationale as claim 1 set forth above. Further regarding claim 22, Burns further discloses: Wherein the at least one function includes a first plurality of functions related to the virtual object, and the virtual object of the first shape includes a first plurality of icons corresponding to the first plurality of functions, (Burns, Fig. 2 and ¶47: wearable device for displaying objects; Fig. 3 and ¶¶49-52 discloses a display of a virtual element in different modalities based on positioning, placing along the wall 30 proximate to the floor, the modality results in the virtual element changing from analog clock to analog and digital or vice versa; ¶41: weather object in different modalities – Fig. 1 – which shows the position on the wall indicating both sunny, i.e. functionality of environment conditions, and temperature functionality, and when moved to other location, the shape changes to e.g. 50) Wherein the instructions, when executed by the at least one processor individually or collectively, cause the wearable electronic device to: based on receiving a second user input to move the virtual object of the first shape to a second area that is different from the first area, replace the virtual object of the first shape with a virtual object of a second shape that is different from the first shape, wherein the virtual object of the second shape includes a second plurality of icons (Burns, Fig. 2 and ¶47: wearable device for displaying objects; Fig. 3 and ¶¶49-52 discloses a display of a virtual element in different modalities based on positioning, placing along the wall 30 proximate to the floor, the modality results in the virtual element changing from analog clock to analog and digital or vice versa; ¶41: weather object in different modalities, including from 2D to 3D shapes) The only limitation that is arguably not explicitly taught by Burns is wherein the virtual object of the second shape includes a second plurality of icons corresponding to a second plurality of functions at least one of which is different from the first plurality of functions, and wherein the second plurality of functions includes the first plurality of functions. In other words, an expansion of the displayed functionalities of a user interface element based on user input for a change of display of the user interface element. Examiner notes that Burns does teach changing the display of an analog clock element, which includes a plurality of icons representing different measurements of time, i.e. minutes vs. hours, which when moved is expanded to show the same analog clock icons for minutes and hours, but also displaying the digital time function (see e.g. Fig. 3). For sake of clarity and compact prosecution, however, additional prior art is relied upon to show that expansion of user interface elements wherein the virtual object of the second shape includes a second plurality of icons corresponding to a second plurality of functions at least one of which is different from the first plurality of functions, and wherein the second plurality of functions includes the first plurality of functions would have been known and obvious to one of ordinary skill in the art.
Khoe discloses: based on receiving a second user input to move the virtual object, replace the virtual object of the first shape with a virtual object of a second shape that is different from the first shape, wherein the virtual object of the second shape includes a second plurality of icons corresponding to a second plurality of functions at least one of which is different from the first plurality of functions, and wherein the second plurality of functions includes the first plurality of functions (Khoe, Figs. 5S to 5T and ¶¶194-195: touch gesture detected on rotation user interface object of popup view 510, where in response to detecting the touch gesture on rotation user interface object 522, the portrait popup view 510 rotates so that it is displayed as landscape popup view, where by rotating second electronic device 100-2 from the portrait orientation to the landscape orientation, the displayed calculator application view changes from the simple calculator application view 512 (FIG. 5S) to the scientific calculator application view 526 (FIG. 5T) on second electronic device 100-2) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the augmented reality user interface for relocating and changing virtual objects as provided by Burns, with the technique of identifying a first area meeting a condition for a virtual object as provided by Mene, with the technique of expanding functionality of a user interface object based on a user input to change the viewing of the object as provided by Khoe, using known electronic interfacing and programming techniques. The modification merely applies a known technique of expanding user interface objects or widgets based on user input to an existing device that changes the viewing of a user interface object based on user input, to yield predictable results of providing a common expansion of functionality to graphical user interface elements based on user input. The expansion of user interface widgets to provide greater information and functionality to a user based on input is applicable to the base device that provides different functionality and views of graphical user interface elements or widgets in augmented reality based on user input. The modification merely combines a known software modification to user interface elements within a graphical user interface design, and further allows for easier access to additional functionality that provides greater usability to a user without requiring complicated or time-wasting input by the user themselves.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM A BEUTEL whose telephone number is (571)272-3132. The examiner can normally be reached Monday-Friday 9:00 AM - 5:00 PM (EST). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, DANIEL HAJNIK can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /WILLIAM A BEUTEL/Primary Examiner, Art Unit 2616
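
For readers mapping the cited art onto the claim language, the claim-1 limitations discussed above describe a small pipeline: analyze the captured scene, collect the areas where a virtual object is displayable, pick the area meeting the recited condition, then choose a shape and function set for that area and display icons for those functions. The sketch below restates that flow purely as an illustration; the names, surface types, and selection rules are hypothetical and are not code from the application, Burns, or Mene:

```python
from dataclasses import dataclass

@dataclass
class Area:
    kind: str                  # e.g. "wall", "table", "floor", "mid-air"
    size: float                # relative size of the displayable region
    placement_count: int = 0   # how often this object was historically placed here

# Hypothetical mapping from the shape of an area to an object shape and its functions,
# loosely mirroring the per-surface "modality" discussion of Burns above.
SHAPE_FOR_AREA = {"wall": "2d_panel", "table": "3d_widget", "floor": "3d_widget", "mid-air": "3d_floating"}
FUNCTIONS_FOR_SHAPE = {"2d_panel": ["forecast_sign"], "3d_widget": ["clock", "alarm"], "3d_floating": ["animation"]}

def select_area(areas: list[Area]) -> Area:
    """Pick the area meeting one recited alternative: the user's most frequent
    placement history, falling back to the largest displayable area."""
    by_history = max(areas, key=lambda a: a.placement_count)
    return by_history if by_history.placement_count > 0 else max(areas, key=lambda a: a.size)

def place_virtual_object(areas: list[Area]) -> dict:
    area = select_area(areas)               # first area meeting the set condition
    shape = SHAPE_FOR_AREA[area.kind]       # first shape corresponding to that area's shape
    functions = FUNCTIONS_FOR_SHAPE[shape]  # functions whose icons the object will show
    return {"area": area.kind, "shape": shape, "icons": functions}

if __name__ == "__main__":
    scene = [Area("wall", 4.0), Area("table", 1.5, placement_count=3), Area("mid-air", 9.0)]
    print(place_virtual_object(scene))
    # -> {'area': 'table', 'shape': '3d_widget', 'icons': ['clock', 'alarm']}
```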

Prosecution Timeline

Aug 02, 2023: Application Filed
Jul 31, 2025: Non-Final Rejection (§103)
Oct 06, 2025: Examiner Interview Summary
Oct 06, 2025: Applicant Interview (Telephonic)
Nov 05, 2025: Response Filed
Nov 21, 2025: Final Rejection (§103)
Jan 26, 2026: Request for Continued Examination
Jan 30, 2026: Response after Non-Final Action
Feb 09, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12581262: AUGMENTED REALITY INTERACTION METHOD AND ELECTRONIC DEVICE (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572258: APPARATUS AND METHOD WITH IMAGE PROCESSING USER INTERFACE (granted Mar 10, 2026; 2y 5m to grant)
Patent 12566531: CONFIGURING A 3D MODEL WITHIN A VIRTUAL CONFERENCING SYSTEM (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561927: MEDIA RESOURCE DISPLAY METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM (granted Feb 24, 2026; 2y 5m to grant)
Patent 12554384: SYSTEMS AND METHODS FOR IMPROVED CONTENT EDITING AT A COMPUTING DEVICE (granted Feb 17, 2026; 2y 5m to grant)
Based on this examiner's 5 most recent grants in similar technology; studying what changed in each case before allowance can inform the response strategy here.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 70% (90% with interview, +20.4% lift)
Median Time to Grant: 2y 7m
PTA Risk: High
Based on 469 resolved cases by this examiner. Grant probability is derived from the career allow rate.
