DETAILED ACTION
This action is in reply to Applicant’s Amendments and Remarks filed on 6/18/2025.
Claims 1-20 are pending. Claims 1-6, 8-12, 14-17, 19, and 20 have been amended. Claims 1 and 20 are independent claims.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDSs) submitted on 12/19/2024, 2/24/2025, 6/18/2025, and 7/23/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7, 11-13, 16, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Rodriguez, II, US PGPUB 2019/0146219 A1 (hereinafter as Rodriguez) in view of KIES et al., US PGPUB 2014/0282272 A1 (hereinafter as KIES I).
Regarding independent claim 1, Rodriguez teaches a method [see e.g. [0003] and fig. 3a] comprising:
determining, using control circuitry [note e.g. in [0031] and [0033] the processing components including a central processing unit and memories], a position of a user device in a field of view of a user in an XR environment [note in step 304 of fig. 3a determining the position of a wristwatch by an augmented reality eyewear device, as described in [0057]; see also the example in [0076] indicating bringing the watch face of a wristwatch into the center of the user’s field of view];
generating, for display using the control circuitry [again, note e.g. in [0031] and [0033] the processing components including a central processing unit and memories]: an executable application running on a physical user interface of the user device, and one or more virtual display elements in the XR environment relative to the position of the user device in the field of view, wherein each virtual display element provides access to another executable application [note in steps 305 and 306 of fig. 3a (described in [0057]) the rendering of an interface by the augmented reality eyewear device showing different application icons displayed in a horizontal line with one application above the watch face, i.e. the display is relative to the position of the device as shown in fig. 3 and also described in [0055], especially lines 7-13; also note in the example described in [0076] and shown in fig. 9, the display of interface items belonging to different applications; note e.g. from [0100] the embodiments where the smartwatch runs the full version of the application displayed];
receiving, at the control circuitry, a command via the respective virtual display element providing access to the respective executable application of the user device [note in the last 5 lines of [0055] receiving a selection of a displayed interface element to run a corresponding application; again note from [0100] the embodiments where the smartwatch runs the full version of the application displayed, i.e. the application is being executed by the wristwatch device; note also the application selection interface in lines 2-3 of [0057]]; and
transmitting, using the control circuitry [again, note e.g. in [0031] and [0033] the processing components including a central processing unit and memories], an instruction to the user device to execute the command [see the portions cited for the previous limitation; also note the data transmission capabilities in [0037]].
Rodriguez does not explicitly teach that the initially displayed application is running in a foreground of the user device nor that each virtual display element of the one or more virtual display elements in the XR environment provides access to an executable application running in a background of the user device without navigation away from the executable application running in the foreground on the physical user interface of the user device.
Rodriguez also does not explicitly teach that receiving the command provides access to the respective executable application running in the background of the user device while simultaneously generating for display the executable application running in the foreground on the physical user interface of the user device.
KIES I teaches an initially displayed executable application running in a foreground of a user device and one or more executable applications running in a background of the user device, wherein a received command provides access to a corresponding executable application running in the background of the user device while simultaneously generating for display the executable application running in the foreground on the physical user interface of the user device [note the steps of fig. 1 and the related text indicating applying a received input command to a certain background application while simultaneously keeping the active executable foreground application displayed; see also fig. 6 and related text].
It would have been obvious to one of ordinary skill in the art having the teachings of Rodriguez and KIES I before the effective filing date of the claimed invention to modify the executable applications running on the user device, accessible via commands received via respective virtual display elements in the framework taught by Rodriguez, by explicitly specifying a foreground application and one or more background applications, wherein the received command provides access to the respective executable application running in the background of the user device while simultaneously generating for display the executable application running in the foreground on the physical user interface of the user device, as per the teachings of KIES I. The motivation for this obvious combination of teachings would be to improve multitasking on a user device by allowing access to background applications without affecting a focused task or foreground application, as suggested by KIES I [see e.g. [0002]-[0005]], which would allow a more efficient and user-friendly display.
Regarding claim 2, the rejection of independent claim 1 is incorporated. Rodriguez further teaches that the executable applications are executable by the control circuitry of the user device [again, note e.g. from [0100] the embodiments where the smartwatch runs the full version of the application displayed]. KIES I further teaches an application running in the foreground and executable applications running in the background [see portions cited in the rejection of claim 1]. See the rejection of claim 1 for motivations to combine the cited art.
Regarding claim 3, the rejection of independent claim 1 is incorporated. Rodriguez further teaches executable applications that are executable by the control circuitry of a server operating in communication with the user device [again, note e.g. from [0101] the exemplary applications of web browsers and maps, which are inherently run by a server communicating with the smartwatch device]. KIES I further teaches an application running in the foreground and executable applications running in the background [see portions cited in the rejection of claim 1]. See the rejection of claim 1 for motivations to combine the cited art.
Regarding claim 4, the rejection of independent claim 1 is incorporated. Rodriguez further teaches that the one or more virtual display elements are generated for display using control circuitry of an XR device operating in communication with the user device [again, note in steps 305 and 306 of fig. 3a (described in [0057]) that the rendering of the interface is by the augmented reality eyewear device; note the communication between the augmented reality eyewear and the smartwatch e.g. in claim 16].
Regarding claim 5, the rejection of independent claim 1 is incorporated. Rodriguez further teaches monitoring the position of the user device; and updating the position of the one or more virtual display elements in the XR environment as the position of the user device changes [note steps 307 and 308 of fig. 3a described in [0057] that indicate continuously determining the position of the wristwatch and adjusting the displayed interface accordingly].
Regarding claim 6, the rejection of independent claim 1 is incorporated. Rodriguez further teaches determining an anchor point of the user device; and generating the one or more virtual display elements relative to the anchor point [note from fig. 3 and the description in [0057] that the display elements are generated in a horizontal line with one element above the watch face, i.e. the watch face center is used as an anchor point for the wristwatch; see also the example shown in fig. 9 (which is described in [0076]) where the display elements are surrounding an anchor point that also coincides with the center of the watch face].
Regarding claim 7, the rejection of claim 6 is incorporated. Rodriguez further teaches that the anchor point is determined by control circuitry of an XR device operating in communication with the user device [note e.g. from [0057] describing steps 304, 307 and 308 that the augmented reality eyewear device determines the position and orientation of the wristwatch and utilizes the calibration data or IMU data for this determination and for adjusting the display so as to be aligned surrounding the watch face; again note the communication between the augmented reality eyewear and the smartwatch e.g. in claim 16].
Regarding claim 11, the rejection of independent claim 1 is incorporated. Rodriguez further teaches determining whether the user device is in a predetermined region of the field of view of the user in the XR environment; and generating for display the one or more virtual display elements when the user device is within the predetermined region [note again, the example described in [0076] and shown in fig. 9 where the gesture bringing the watch face into the center of the user’s field of view is detected and the display is accordingly generated].
Regarding claim 12, the rejection of claim 11 is incorporated. Rodriguez further teaches transitioning the one or more virtual display elements between a first display state and a second display state as the user device moves into the predetermined region [note again the example described in [0076] where the gesture bringing the watch face into the center of the user’s field of view is detected and the display is accordingly activated (a second display state); note that if the wristwatch is removed from the field of view (i.e. is not in the predetermined region), the display is deactivated (i.e. is in a display state of being not shown, which is in line with a first display state as supported by Applicant’s specification in [0074] and as shown in fig. 9 of Applicant’s drawings)].
Regarding claim 13, the rejection of claim 12 is incorporated. Rodriguez further teaches that the transition between display states is based on a type of user input [note again, the example described in [0076] and note the user input being a gesture bringing the watch face into the center of the user’s field of view, thus resulting in a transition between the deactivated state to the activated state of the interface].
Regarding claim 16, the rejection of independent claim 1 is incorporated. Rodriguez further teaches controlling the user interface provided by one of the one or more virtual display elements by virtue of user interaction with the user device [see e.g. [0073] describing scrolling the display elements (which controls at least one user interface provided by one of the display elements) by virtue of a user interacting with the bezel of the wristwatch 102].
Regarding claim 19, the rejection of independent claim 1 is incorporated. Rodriguez further teaches that the generation of the one or more virtual display elements is predetermined based on a setting of the user device [again, note from fig. 3 and the description in [0057] that the display elements are generated in a horizontal line with one element above the watch face, i.e. in a linear-format layout to the left and right of the watch face center of the wristwatch (which is an exemplary predetermined setting); see also the example shown in fig. 9 where the display elements are surrounding the center of the watch face, i.e. in a circular/radial-format layout around the watch face].
Regarding independent claim 20, Rodriguez also teaches a system [see e.g. [0003] and fig. 1] comprising:
memory configured to store an XR environment [note e.g. from [0026] the storage of data represented by physical quantities within the system’s memories; see also from [0032]-[0033] the collection of data through sensing the environment]; and
control circuitry [note e.g. in [0031] the processing components including a central processing unit] configured to:
determine a position of a user device in a field of view of a user in the XR environment [note in step 304 of fig. 3a determining the position of a wristwatch by an augmented reality eyewear device, as described in [0057]; see also the example in [0076] indicating bringing the watch face of a wristwatch into the center of the user’s field of view];
generate, for display: an executable application running on a physical user interface of the user device, and one or more virtual display elements in the XR environment relative to the position of the user device in the field of view, wherein each virtual display element provides access to another executable application [note in steps 305 and 306 of fig. 3a (described in [0057]) the rendering of an interface by the augmented reality eyewear device showing different application icons displayed in a horizontal line with one application above the watch face, i.e. the display is relative to the position of the device as shown in fig. 3 and also described in [0055], especially lines 7-13; also note in the example described in [0076] and shown in fig. 9, the display of interface items belonging to different applications; note e.g. from [0100] the embodiments where the smartwatch runs the full version of the application displayed];
receive a command via the respective virtual display element providing access to the respective executable application of the user device [note in the last 5 lines of [0055] receiving a selection of a displayed interface element to run a corresponding application; again note from [0100] the embodiments where the smartwatch runs the full version of the application displayed, i.e. the application is being executed by the wristwatch device; note also the application selection interface in lines 2-3 of [0057]]; and
transmit an instruction to the user device to execute the command [see the portions cited for the previous limitation; also note the data transmission capabilities in [0037]].
Rodriguez does not explicitly teach that the initially displayed application is running in a foreground of the user device nor that each virtual display element of the one or more virtual display elements in the XR environment provides access to an executable application running in a background of the user device without navigation away from the executable application running in the foreground on the physical user interface of the user device.
Rodriguez also does not explicitly teach that receiving the command provides access to the respective executable application running in the background of the user device while simultaneously generating for display the executable application running in the foreground on the physical user interface of the user device.
KIES I teaches an initially displayed executable application running in a foreground of a user device and one or more executable applications running in a background of the user device, wherein a received command provides access to a corresponding executable application running in the background of the user device while simultaneously generating for display the executable application running in the foreground on the physical user interface of the user device [note the steps of fig. 1 and the related text indicating applying a received input command to a certain background application while simultaneously keeping the active executable foreground application displayed; see also fig. 6 and related text].
It would have been obvious to one of ordinary skill in the art having the teachings of Rodriguez and KIES I before the effective filing date of the claimed invention to modify the executable applications running on the user device, accessible via commands received via respective virtual display elements in the framework taught by Rodriguez, by explicitly specifying a foreground application and one or more background applications, wherein the received command provides access to the respective executable application running in the background of the user device while simultaneously generating for display the executable application running in the foreground on the physical user interface of the user device, as per the teachings of KIES I. The motivation for this obvious combination of teachings would be to improve multitasking on a user device by allowing access to background applications without affecting a focused task or foreground application, as suggested by KIES I [see e.g. [0002]-[0005]], which would allow a more efficient and user-friendly display.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Rodriguez in view of KIES I, as applied to claim 1 above, and further in view of KIES et al., US PGPUB 2022/0100265 A1 (hereinafter as KIES II).
Regarding claim 8, the rejection of independent claim 1 is incorporated. Rodriguez further teaches that the position of the one or more virtual display elements in the XR environment relative to the position of the user device is defined by a predetermined layout [again, note from fig. 3 and the description in [0057] that the display elements are generated in a horizontal line with one element above the watch face, i.e. in a linear-format layout to the left and right of the watch face center of the wristwatch; see also the example shown in fig. 9 where the display elements are surrounding the center of the watch face, i.e. in a circular/radial-format layout around the watch face].
The previously combined art, however, does not explicitly teach that the predetermined layout corresponds to a type of user input, the method comprising: determining the type of user input; and generating the one or more virtual display elements in the predetermined layout.
KIES II teaches a method [see e.g. [0014]] comprising displaying one or more display elements in an XR environment in positions relative to the position of an object, wherein the position of the one or more display elements in the XR environment relative to the position of the object is defined by a predetermined layout corresponding to a type of user input, the method comprising: determining the type of user input; and generating the one or more display elements in the predetermined layout [see e.g. [0107] and compare figs. 4 and 5; note e.g. placement of virtual content in a certain interface layout that corresponds to a type of input (the user holding the object with the left hand or the right hand); note determining the type of user input and generating the display in the corresponding predetermined layout accordingly; see again [0014]].
It would have been obvious to one of ordinary skill in the art having the teachings of Rodriguez and KIES II before the effective filing date of the claimed invention to modify the predetermined layout of the one or more display elements taught by Rodriguez by explicitly specifying that it corresponds to a type of user input, as per the teachings of KIES II. The motivation for this obvious combination of teachings would be to provide the ability for the XR system to dynamically configure a user interface based on attributes of a user interaction, as suggested by KIES II [see e.g. [0059] and [0091]], which would allow a more user-friendly display.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Rodriguez in view of KIES I, as applied to claim 1 above, and further in view of Singh et al., US Patent 11,385,775 B2 (hereinafter as Singh).
Regarding claim 9, the rejection of independent claim 1 is incorporated. Rodriguez further teaches that the position of the one or more virtual display elements in the XR environment relative to the position of the user device is defined by a predetermined layout [again, note from fig. 3 and the description in [0057] that the display elements are generated in a horizontal line with one element above the watch face, i.e. in a linear-format layout to the left and right of the watch face center of the wristwatch; see also the example shown in fig. 9 where the display elements are surrounding the center of the watch face, i.e. in a circular/radial-format layout around the watch face].
KIES I further teaches the applications running in the background [see portions cited in the rejection of claim 1].
The previously combined art, however, does not explicitly teach that the predetermined layout is based on user activity relating to each executable application.
Singh teaches a method of displaying one or more display elements in positions that are defined by a predetermined layout based on user activity relating to executable applications that the display elements correspond to [see e.g. col. 1, line 55 - col. 2, line 6 describing a layout for application windows that is based on user activity; see also examples in col. 2, lines 41-48].
It would have been obvious to one of ordinary skill in the art having the teachings of the previously combined art and Singh before the effective filing date of the claimed invention to apply Singh’s teachings regarding utilizing user activity relating to executable applications for determining a layout to the positioning of the one or more virtual display elements in the XR environment taught by Rodriguez and modified by KIES I to execute in the background. The motivation for this obvious combination of teachings would be to allow user patterns to be identified and utilized to automatically position UI objects in a way that saves time and effort, as suggested by Singh [see e.g. col. 21, lines 19-34].
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Rodriguez in view of KIES I, as applied to claim 1 above, and further in view of KHAN et al., US PGPUB 2022/0326967 A1 (hereinafter as KHAN).
Regarding claim 10, the rejection of independent claim 1 is incorporated.
KIES I further teaches the applications running in the background [see portions cited in the rejection of claim 1].
The previously combined art does not explicitly teach receiving an input to switch between usage of the executable applications; and generating for display the one or more virtual display elements in response to receiving the input.
KHAN teaches receiving a command to switch between usage of executable applications; and generating for display one or more display elements in response to receiving the command [see e.g. [0073] indicating generating layout information including one or more display elements responsive to receiving information from a focus switching module; note from [0118] that the switch of focus can be between two different software applications; see also [0039] indicating a dynamic change of displayed elements after receiving a user indication to switch focus to another application; see also [0010]].
It would have been obvious to one of ordinary skill in the art having the teachings of the previously combined art and KHAN before the effective filing date of the claimed invention to modify Rodriguez/KIES I’s method of displaying elements in an XR environment corresponding to applications running in the background by explicitly specifying receiving a command to switch between usage of executable applications; and generating for display the one or more virtual display elements in response to receiving the command, as per the teachings of KHAN. The motivation for this obvious combination of teachings would be to enable efficient use of the layout by reducing clutter and reusing display locations, as suggested by KHAN [see again e.g. [0039]], which would enhance the user’s experience.
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Rodriguez in view of KIES I, as applied to claim 12 above, and further in view of KLEIN et al., US PGPUB 2023/0137920 A1 (hereinafter as KLEIN).
Regarding claim 14, the rejection of claim 12 is incorporated.
Rodriguez further teaches defining an anchor point of the user device and generating the one or more virtual display elements relative to the anchor point of the user device as the user device moves into the predetermined region [note again the example shown in fig. 9 (which is described in [0076]) where the display elements are generated surrounding an anchor point that coincides with the center of the watch face; see also from fig. 3 and the description in [0057] that the display elements are generated in a horizontal line with one element above the watch face, i.e. the watch face center is used as an anchor point for the wristwatch; note also the anchoring of the interface to the wristwatch as per [0055] and [0075]; see also from [0057] describing steps 304, 307 and 308 that the augmented reality eyewear device determines the position and orientation of the wristwatch and utilizes the calibration data or IMU data for this determination and for adjusting the display so as to be aligned surrounding the watch face].
The previously combined art does not explicitly teach defining an anchor point of the XR environment. Neither does it teach transitioning the one or more virtual display elements from the anchor point of the XR environment towards the anchor point of the user device as the user device moves into the predetermined region.
KLEIN teaches:
defining an anchor point of an XR environment [see e.g. [0006] describing docking a control object e.g. to a hand of a user in an AR environment; see also [0077] describing step 2514 of fig. 25A and note the example of a menu (which comprises one or more display elements) as a control object; note also in [0078] that the control object continues to be displayed in a fixed location within the AR environment]; and
transitioning one or more display elements from the anchor point of the XR environment towards an anchor point of a newly displayed object [see e.g. in [0081] the activation of a display of a persistence object within the AR environment and the docking of the control object (which comprises one or more display elements) to the persistence object; see also fig. 25A, step 2532 with a True outcome and fig. 25B, steps 2534 and 2536; note in the last 6 lines of [0067] the transitioning of positioning of control object between docking positions; note that the persistence object may be a virtual display, as per [0058] and as shown in fig. 12; see also [0059]].
It would have been obvious to one of ordinary skill in the art having the teachings of the previously combined art and KLEIN before the effective filing date of the claimed invention to modify Rodriguez/KIES I’s method of adaptively displaying elements in an XR environment by applying KLEIN’s teaching of transitioning the display of a control object from an initially determined anchor point of the environment towards another anchor point of a newly displayed object in the environment to Rodriguez’ newly displayed user device object as it moves into the predetermined region (to be in the center of the field of view of the XR environment). The motivation for this obvious combination of teachings would be to improve the operations of AR computing devices by intelligently docking and re-orienting control objects which would provide an enhanced user experience, as suggested by KLEIN [see again e.g. [0038]].
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Rodriguez in view of KIES I, as applied to claim 1 above, and further in view of Canberk et al., US PGPUB 2020/0035003 A1 (hereinafter as Canberk).
Regarding claim 15, the rejection of independent claim 1 is incorporated.
The previously combined art does not explicitly teach determining a level of user interaction with a first virtual display element; and modifying the position or appearance of the first virtual display element in response to the level of user interaction being above a threshold level.
Canberk teaches determining a level of user interaction with a display element (in an XR environment); and modifying the position or appearance of the display element in response to the level of user interaction being above a threshold level [see e.g. [0014], especially the last 8 lines describing the change in size of an AR object responsive to detecting that an interaction time exceeds a certain threshold interaction time].
It would have been obvious to one of ordinary skill in the art having the teachings of the previously combined art and Canberk before the effective filing date of the claimed invention to apply Canberk’s teachings regarding modifying the appearance of an element displayed in an AR environment in response to a level of user interaction determined to be above a threshold level to the first display element in the XR environment taught by Rodriguez. The motivation for this obvious combination of teachings would be to allow user patterns to be identified and utilized to enable visualizing the attractiveness or availability of the element for further user interaction, as suggested by Canberk [see again e.g. [0014], especially the last 3 lines].
Claims 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Rodriguez in view of KIES I, as applied to claim 1 above, and further in view of KIEMELE et al., US PGPUB 2019/0065026 A1 (hereinafter as KIEMELE).
Regarding claim 17, the rejection of independent claim 1 is incorporated. Rodriguez further teaches a user device that is a physical user device including a physical display screen [see the smartwatch indicated in [0057]], wherein the XR environment is an AR or MR environment provided using an AR or MR device [see e.g. [0003] indicating an augmented reality eyewear], and wherein the respective executable application is running on the physical user device operating in communication with the AR or MR device [again, note e.g. from [0100] the embodiments where the smartwatch runs the full version of the application displayed; again note the communication between the augmented reality eyewear and the smartwatch e.g. in claim 16].
KIES I further teaches the applications running in the background [see portions cited in the rejection of claim 1].
The previously combined art, however, does not explicitly teach rendering, using the AR or MR device, a display screen of the physical user device in the AR or MR environment; and positioning, in the AR or MR environment, the display screen, rendered using the AR or MR device, relative to the physical display screen of the physical user device.
KIEMELE teaches an AR or MR environment [note the AR mode described in lines 7-12 of [0045]] where displaying comprises: rendering, using an AR or MR device [see HMD device 104 shown in fig. 1], a display screen of a physical user device in the AR or MR environment; and positioning, in the AR or MR environment, the display screen, rendered using the AR or MR device, relative to the physical display screen of the physical user device [see e.g. 302 in fig. 3 which is a rendering (by the HMD 104) of the device 110 including a rendering of the display screen 113 in a 3D location corresponding to the real-world location of the display screen of the physical user device, as described in [0022]; see also [0016]].
It would have been obvious to one of ordinary skill in the art having the teachings of the previously combined art and KIEMELE before the effective filing date of the claimed invention to apply KIEMELE’s teachings regarding rendering, using the AR or MR device, a display screen of the physical user device in the AR or MR environment; and positioning, in the AR or MR environment, the display screen, rendered using the AR or MR device, relative to the physical display screen of the physical user device to the AR environment taught by Rodriguez. The motivation for this obvious combination of teachings would be to enable mimicking the real-world appearance and positioning of the device, as well as interacting with it, via its representation in the AR environment, as suggested by KIEMELE [again, see [0016]].
Regarding claim 18, the rejection of independent claim 1 is incorporated. Rodriguez further teaches a user device that is a physical user device [again, see the smartwatch indicated in [0057]].
The previously combined art, however, does not explicitly teach that the XR environment is a VR environment. Neither does it teach rendering, using a VR device, a virtual twin of the physical user device; and positioning, in the VR environment, the virtual twin based on a determined position of the physical user device.
KIEMELE teaches an XR environment that is a VR environment [note the VR mode described in lines 1-7 of [0045]] comprising: rendering, using a VR device [see HMD device 104 shown in fig. 1], a virtual twin of a physical user device; and positioning, in the VR environment, the virtual twin based on a determined position of the physical user device [see e.g. 302 in fig. 3 which is a rendering (by the HMD 104) of the device 110 in a 3D location corresponding to the real-world location of the physical user device, as described in [0022]; see also fig. 1 and [0016]].
It would have been obvious to one of ordinary skill in the art having the teachings of the previously combined art and KIEMELE before the effective filing date of the claimed invention to modify the AR environment taught by Rodriguez by specifying the rendering of a virtual twin of the physical user device, and positioning the virtual twin based on a determined position of the physical user device, as per the teachings of KIEMELE in the pure VR mode of operation. The motivation for this obvious combination of teachings would be to enable both mimicking the real-world appearance and positioning of the device and interacting with it, via its representation in the XR environment, as suggested by KIEMELE [again, see [0016]].
Response to Arguments
Applicant’s amendment to address the previously presented informalities has been fully considered and is persuasive. The claim objections previously presented have been accordingly withdrawn.
Applicant’s amendments to the claims in regards to the previously presented rejections under 35 U.S.C. 112(b) have been fully considered and are persuasive. The claim rejections previously presented under 35 U.S.C. 112(b) have been accordingly withdrawn.
Applicant’s arguments with respect to the amended independent claim(s) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Examiner notes the following cited prior art reference:
US-10289290-B2, Gil et al., which teaches displaying a portion of background applications concurrently with the display of a foreground application [see e.g. abstract and front figure].
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARIA S AYAD whose telephone number is (571)272-2743. The examiner can normally be reached Monday-Friday, 7:30 am - 4:30 pm, alternate Fridays, EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Queler, can be reached at (571) 272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARIA S AYAD/Primary Examiner, Art Unit 2172