DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to the claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6-7, 11-14, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Furtwangler et al., US 2019/0340833 A1 (hereinafter “Furtwangler”) in view of Poulos et al., US 2014/0333666 A1 (hereinafter “Poulos”).
Regarding claim 1, Furtwangler discloses a head mounted display (FIG. 1, virtual reality headset 132 at [0035]) comprising:
a display (FIG. 1, virtual reality headset 132 of the client system for display to a user on a display device at [0045]);
a sensor (FIG. 1, sensor(s) 142 and [0035]) configured to specify a direction in which the head mounted display is facing in a real space (FIG. 1, 142 and [0035] sensors listed for tracking the location of the headset device 132; head camera for tracking headset 132 and determining the position in space and [0045] overlaying virtual images on the real world based on camera on headset 132);
a receiver (FIG. 1, virtual reality input devices 134 including sensors 144 at [0035]) configured to receive an operation input (FIGS. 1-2B, [0005], [0012]-[0013], [0035] and [0046]-[0047]; e.g., click a button, line of sight detected based on eye tracking and direction of device 132, and/or pointing of an input device 134) by a user (101) of the head mounted display (FIGS. 1-3C, virtual reality input devices 134 including sensors 144 at [0035] and [0046]-[0049]; user may wear the headset 132 and use the input devices to interact with the virtual reality environment 136 generated by headset 132); and
a processing circuit (FIG. 14, processor 1402, [0084]-[0088]), wherein the processing circuit is configured to:
comprise a first display control (controllers at [0005] and [0034]-[0035]) that arranges and displays a virtual object (FIGS. 2A-2B and 210a-h at [0046]-[0047]) at a first position (210h at a first position and broadly, 210 is displayed within panel 202 at [0046]-[0047], first position considered both placement of 202 and specific placement of 210h) with respect to a first direction (FIGS. 2A-2B, user 101 facing forward toward panel 202 at [0046]) of the head mounted display (132) specified by the sensor (142 at [0035] and FIGS. 2A-2B, sensor specifying direction; and [0069], when the user turns his head, the virtual reality device may adjust what is displayed to the user accordingly and render a corresponding portion of the virtual environment that is in that direction – panels 202 not fixed to headset direction, but visible sections rendered in accordance with head and headset movement),
and a second display control (controllers at [0005] and [0034]-[0035] and [0081]) that sets an operation area (FIGS. 2A-2B, 204 and [0047]) at a second position (FIGS. 2A-2B with panel 204 off to the side of panel 202 at [0047]) with respect to a second direction different from the first direction (FIGS. 2A-2B with panel 204 off to the side of panel 202 is a second direction which is differently angled from first direction with relation to the user 101 at [0047]) and arranges and displays the virtual object in the operation area (FIGS. 2A-2B and [0046]-[0047], inputting an input (e.g., clicking a button as a call operation) using the pointer of the input device 134, and moving the app 210h (noting grayed out in FIG. 2B at [0047]), the app 210h therefore, has moved, opened, and is placed in panel 204, without any input specific for moving a position of the virtual object), and
control the display (FIGS. 2A-2B and [0045]-[0047] control the system to render virtual space for display to user) to terminate displaying of the virtual object at the first position, change the arrangement position of the virtual object from the first position to the operation area, and display, in the operation area, the virtual object that was displayed at the first position, as the second display control (FIGS. 2A-2B and [0046]-[0047], inputting an input (e.g., clicking a button as a call operation) using the pointer of the input device 134, and moving the app 210h (noting grayed out in FIG. 2B at [0047]); the app 210h, therefore, has moved, opened, and is placed in panel 204, without any input specific for moving a position of the virtual object), in response to the call operation input received by the receiver, without receiving an input that specifies the arrangement position of the virtual object by the user (FIGS. 2A-2B and [0046]-[0047], inputting an input (e.g., clicking a button as a call operation) using the pointer of the input device 134, and moving the app 210h (noting grayed out in FIG. 2B at [0047]); the app 210h, therefore, has moved, opened, and is placed in panel 204, without any input specific for moving a position of the virtual object).
However, in the above embodiment, Furtwangler does not explicitly disclose that the system can control the display not to change an arrangement position of the virtual object from the first position even if specifying by the sensor that the head mounted display is facing the second direction different from the first direction, when the receiver does not receive a call operation input by the user; and after changing the arrangement position of the virtual object from the first position to the operation area, control the display not to change the arrangement position of the virtual object from the operation area regardless of a direction of the head mounted display, when the receiver does not receive an operation input by the user.
In another embodiment, Furtwangler discloses control the display ([0069] and virtual reality device may adjust what is displayed) not to change an arrangement position of the virtual object from the first position even if specifying by the sensor that the head mounted display is facing the second direction different from the first direction as the first display control, when the receiver does not receive a call operation input by the user ([0069], when the user turns his head, the virtual reality device may adjust what is displayed to the user accordingly and render a corresponding portion of the virtual environment that is in that direction, but does not move any objects rendered in unison with the head movement; noting: as applied to the embodiment of FIGS. 2A-2B disclosed at [0046]-[0047], panel 202 would be rendered as not fixed to headset direction, but visible sections rendered in accordance with head and headset movement; therefore, position of the object/app 210h would remain static in the environment when no input received (i.e., call operation (e.g., click selection operation by user 101 on 134)) even when head position changes).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the embodiment of FIGS. 2A-2B of Furtwangler to incorporate the static environment imaging of FIG. 11 and [0069] as disclosed by Furtwangler because such an integration/combination of elements is clearly contemplated by Furtwangler ([0093] – embodiments may include combination/permutations comprehended by one of ordinary skill) and additionally, the combination would produce the stated goal of providing the user with an intuitive experience – which gives the user a sense of “presence” or a feeling that they are actually in the virtual environment (see Furtwangler at [0014]). Therefore, a person of ordinary skill in the art would have been motivated to combine the embodiments to achieve the claimed invention and there would have been a reasonable expectation of success.
However, Furtwangler does not explicitly disclose after changing the arrangement position of the virtual object from the first position to the operation area, control the display not to change the arrangement position of the virtual object from the operation area regardless of a direction of the head mounted display, when the receiver does not receive an operation input by the user.
In the same field of endeavor, Poulos discloses after changing the arrangement position of the virtual object from the first position to the operation area, control the display not to change the arrangement position of the virtual object from the operation area regardless of a direction of the head mounted display, when the receiver does not receive an operation input by the user (FIGS. 7-11, [0041]-[0042] and [0052]-[0056] describing the pinned location and pinning feature of an object by user interaction, which prevents the movement of the object).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the virtual display methodology of the embodiments of Furtwangler to incorporate the pinning capability for not moving an object as disclosed by Poulos because the references are within the same field of endeavor, namely, virtual environments with objects and display areas. The motivation to combine these references would have been to limit the intrusiveness of the virtual object on a user’s field of view (see Poulos at [0012] and [0037]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Regarding claim 2, Furtwangler in view of Poulos discloses the head mounted display according to claim 1 (see above), further comprising: a memory (Furtwangler at FIG. 1, system content 138, data stores 164 and [0034]-[0041]; FIG. 14, memory 1404 and storage 1406 and data caches at [0084]-[0086]) configured to store an application (Furtwangler at FIG. 1, [0034]-[0035] software, downloaded contents, such as applications and FIG. 14 at [0082]-[0090] software running on system 1400, with processors loading instructions from memory 1404), wherein the virtual object is a display screen or an operation screen of the application stored in the memory (Furtwangler at FIGS. 2A-2B with application 210h being a web browser produced in panel 204 at [0046]-[0047]).
Regarding claim 3, Furtwangler in view of Poulos discloses the head mounted display according to claim 1 (see above), wherein the display is a see-through display (Furtwangler at [0015] augmented reality, mixed reality, and hybrid reality, and [0045] describing overlaying virtual objects over real world images captured for headset 132 so that user may interact with real and virtual objects simultaneously).
Regarding claim 4, Furtwangler in view of Poulos discloses the head mounted display according to claim 1 (see above), further comprising: an image pickup unit (Furtwangler at FIGS. 1-2B and [0015], [0035], [0045], camera on the headset of the user, used for tracking the direction of the user) configured to take an image of an external world (Furtwangler at [0015] augmented reality, mixed reality, and hybrid reality, and [0045] describing overlaying virtual objects over real world images captured for headset 132 by a camera on the headset, so that the user may interact with real and virtual objects simultaneously), wherein the processing circuit is configured to control the display to display the virtual object superimposed on the image of the external world taken by the image pickup unit (Furtwangler at [0015] augmented reality, mixed reality, and hybrid reality, “artificial reality content may include generated content combined with captured content (e.g., real-world photographs)” and [0045] describing overlaying virtual objects over real world images captured for headset 132 by a camera on the headset, so that the user may interact with real and virtual objects simultaneously).
Regarding claim 6, Furtwangler in view of Poulos discloses the head mounted display according to claim 1 (see above), further comprising: an operation key or a touch sensor (Furtwangler at FIGS. 1-2B and [0035] touch sensor of sensors 144 included in the virtual reality input device 134, or button at [0012]-[0013] and [0046]), wherein the receiver is configured to receive the operation input indicated by the operation key or the touch sensor (Furtwangler at FIGS. 1-2B, [0035] touch sensors 144 to generate sensor data that tracks the location of the input device 134 and position of the user’s fingers, and [0046] inputting an input via the button of the virtual reality input device 134).
Regarding claim 7, Furtwangler in view of Poulos discloses the head mounted display according to claim 1 (see above), further comprising: an image pickup unit configured to take an image of a site of the user (Furtwangler at [0035], tracking camera placed external to the headset 132 and within line of sight of the headset), wherein the receiver is configured to receive the operation input indicated by a motion of the site of the user taken by the image pickup unit (Furtwangler at FIGS. 7A-7B and 8 at [0059]-[0061] describing gesture inputs of the user and user input device 132, and FIGS. 9A-9H with gestures made at [0062]-[0063], additionally at [0051] gesture at target application and hand movement data).
Regarding claim 11, Furtwangler discloses a head mounted display (FIG. 1, virtual reality headset 132 at [0035]) comprising:
a display (FIG. 1, virtual reality headset 132 of the client system for display to a user on a display device at [0045]);
a sensor (FIG. 1, sensor(s) 142 and [0035]) configured to specify a direction in which the head mounted display is facing in a real space (FIG. 1, 142 and [0035] sensors listed for tracking the location of the headset device 132);
a receiver (FIG. 1, virtual reality input devices 134 including sensors 144 at [0035]) configured to receive an operation input ([0005], [0012]-[0013], [0035] and FIGS. 1-2B; e.g., click a button, line of sight based on eye tracking and direction of device 132, and/or pointing of an input device 134) by a user (101) of the head mounted display (FIGS. 1-3C, virtual reality input devices 134 including sensors 144 at [0035] and [0046]-[0049] user may wear the headset 132 and use the input devices to interact with virtual reality environment 136 generated by headset 132); and
a processing circuit (FIG. 14, processor 1402, [0084]-[0088]), wherein the processing circuit is configured to:
comprise a first display control (controllers at [0005] and [0034]-[0035]) that arranges and displays a virtual object (FIGS. 2A-2B and 210a-h at [0046]-[0047]) at a first position (210h at a first position and broadly, 210 is displayed within panel 202 at [0046]-[0047], first position considered both placement of 202 and specific placement of 210h) with respect to a first direction (FIGS. 2A-2B, user 101 facing forward toward panel 202 at [0046]) of the head mounted display (132) specified by the sensor (142 at [0035] and FIGS. 2A-2B, sensor specifying direction; and [0069], when the user turns his head, the virtual reality device may adjust what is displayed to the user accordingly and render a corresponding portion of the virtual environment that is in that direction – panels 202 not fixed to headset direction, but visible sections rendered in accordance with head and headset movement),
and a second display control (controllers at [0005] and [0034]-[0035] and [0081]) that sets an operation area (FIGS. 2A-2B, 204 and [0047]) at a second position (FIGS. 2A-2B with panel 204 off to the side of panel 202 at [0047]) with respect to a second direction different from the first direction (FIGS. 2A-2B with panel 204 off to the side of panel 202 is a second direction which is differently angled from first direction with relation to the user 101 at [0047]) and arranges and displays the virtual object in the operation area (FIGS. 2A-2B and [0046]-[0047], inputting an input (e.g., clicking a button as a call operation) using the pointer of the input device 134, and moving the app 210h (noting grayed out in FIG. 2B at [0047]), the app 210h therefore, has moved, opened, and is placed in panel 204, without any input specific for moving a position of the virtual object), and
control the display (FIGS. 2A-2B and [0045]-[0047] control the system to render virtual space for display to user) to terminate displaying of the virtual object at the first position, change the arrangement position of the virtual object from the first position to the operation area, and display, in the operation area, the virtual object that was displayed at the first position, as the second display control (FIGS. 2A-2B and [0046]-[0047], inputting an input (e.g., clicking a button as a call operation) using the pointer of the input device 134, and moving the app 210h (noting grayed out in FIG. 2B at [0047]); the app 210h, therefore, has moved, opened, and is placed in panel 204, without any input specific for moving a position of the virtual object), in response to the call operation input received by the receiver, without receiving an input that specifies the arrangement position of the virtual object by the user (FIGS. 2A-2B and [0046]-[0047], inputting an input (e.g., clicking a button as a call operation) using the pointer of the input device 134, and moving the app 210h (noting grayed out in FIG. 2B at [0047]); the app 210h, therefore, has moved, opened, and is placed in panel 204, without any input specific for moving a position of the virtual object).
However, in the above embodiment, Furtwangler does not explicitly disclose that the system can control the display not to change an arrangement position of the virtual object from the first position even if specifying by the sensor that the head mounted display is facing the second direction different from the first direction, when the receiver does not receive a call operation input by the user; and after changing the arrangement position of the virtual object from the first position to the operation area, control the display not to change the arrangement position of the virtual object from the operation area regardless of a direction of the head mounted display, when the receiver does not receive an operation input by the user.
In another embodiment, Furtwangler discloses control the display ([0069] and virtual reality device may adjust what is displayed) not to change an arrangement position of the virtual object from the first position even if specifying by the sensor that the head mounted display is facing the second direction different from the first direction as the first display control, when the receiver does not receive a call operation input by the user ([0069], when the user turns his head, the virtual reality device may adjust what is displayed to the user accordingly and render a corresponding portion of the virtual environment that is in that direction, but does not move any objects rendered in unison with the head movement; noting: as applied to the embodiment of FIGS. 2A-2B disclosed at [0046]-[0047], panel 202 would be rendered as not fixed to headset direction, but visible sections rendered in accordance with head and headset movement; therefore, position of the object/app 210h would remain static in the environment when no input received (i.e., call operation (e.g., click selection operation by user 101 on 134)) even when head position changes).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the embodiment of FIGS. 2A-2B of Furtwangler to incorporate the static environment imaging of FIG. 11 and [0069] as disclosed by Furtwangler because such an integration/combination of elements is clearly contemplated by Furtwangler ([0093] – embodiments may include combination/permutations comprehended by one of ordinary skill) and additionally, the combination would produce the stated goal of providing the user with an intuitive experience – which gives the user a sense of “presence” or a feeling that they are actually in the virtual environment (see Furtwangler at [0014]). Therefore, a person of ordinary skill in the art would have been motivated to combine the embodiments to achieve the claimed invention and there would have been a reasonable expectation of success.
However, Furtwangler does not explicitly disclose after changing the arrangement position of the virtual object from the first position to the operation area, control the display not to change the arrangement position of the virtual object from the operation area regardless of a direction of the head mounted display, when the receiver does not receive an operation input by the user.
In the same field of endeavor, Poulos discloses after changing the arrangement position of the virtual object from the first position to the operation area, control the display not to change the arrangement position of the virtual object from the operation area regardless of a direction of the head mounted display, when the receiver does not receive an operation input by the user (FIGS. 7-11, [0041]-[0042] and [0052]-[0056] describing the pinned location and pinning feature of an object by user interaction, which prevents the movement of the object).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the virtual display methodology of the embodiments of Furtwangler to incorporate the pinning capability for not moving an object as disclosed by Poulos because the references are within the same field of endeavor, namely, virtual environments with objects and display areas. The motivation to combine these references would have been to limit the intrusiveness of the virtual object on a user’s field of view (see Poulos at [0012] and [0037]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Regarding claim 12, it is similar in scope to claim 2 above; therefore, claim 12 is similarly analyzed and rejected as claim 2.
Regarding claim 13, it is similar in scope to claim 3 above; therefore, claim 13 is similarly analyzed and rejected as claim 3.
Regarding claim 14, it is similar in scope to claim 4 above; therefore, claim 14 is similarly analyzed and rejected as claim 4.
Regarding claim 16, it is similar in scope to claim 6 above; therefore, claim 16 is similarly analyzed and rejected as claim 6.
Regarding claim 17, it is similar in scope to claim 7 above; therefore, claim 17 is similarly analyzed and rejected as claim 7.
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Furtwangler in view of Poulos as applied to claims 4 and 14 respectively, further in view of Terahata et al., US 2018/0321817 A1 (hereinafter “Terahata”).
Regarding claim 5, Furtwangler in view of Poulos discloses the head mounted display according to claim 4 (see above).
However, Furtwangler does not explicitly disclose wherein the display is a non-transmission type.
In the same field of endeavor, Terahata discloses a head mounted device and display system (FIG. 1, 100 and [0059]-[0065]) wherein the display is a non-transmission type (FIGS. 1 and 7 display 130 described as non-transmissive at [0063]-[0064] and [0114]).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the head mounted display system of Furtwangler in view of Poulos to incorporate the non-transmissive display as disclosed by Terahata because the references are within the same field of endeavor, namely, head mounted display systems with augmented reality imaging and display capable of receiving an input. The motivation to combine these references would have been to improve the immersive experience of the user (see Terahata at [0114]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Regarding claim 15, it is similar in scope to claim 5 above; therefore, claim 15 is similarly analyzed and rejected as claim 5.
Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Furtwangler in view of Poulos as applied to claims 7 and 17 respectively, further in view of Maciocci et al., US 2012/0249741 A1 (hereinafter “Maciocci”).
Regarding claim 8, Furtwangler in view of Poulos discloses the head mounted display according to claim 7 (see above).
However, although Furtwangler in view of Poulos discloses finger input at [0035], Furtwangler does not explicitly disclose capturing input operations wherein the site of the user is a finger.
In the same field of endeavor, Maciocci discloses a head mounted device and display system for input detection (generally, FIG. 5B and [0070] describing head mounted displays, and FIG. 37 generally) and capturing input operations wherein the site of the user is a finger (FIG. 37 and [0299]-[0308] gesture tracking camera capable of determining input by a finger 3705 for selecting an interactive element 14, various gestures at [0010] and [0070]).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the head mounted display system of Furtwangler in view of Poulos to incorporate finger input detection by Maciocci because the references are within the same field of endeavor, namely, head mounted display systems with augmented reality imaging and display capable of receiving an input. The motivation to combine these references would have been to provide intuitive interaction with virtual objects making it easy to learn for the user (see Maciocci at [0091]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Regarding claim 18, it is similar in scope to claim 8 above; therefore, claim 18 is similarly analyzed and rejected as claim 8.
Claims 9-10 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Furtwangler in view of Poulos as applied to claims 1 and 11 respectively, further in view of Sawaki, US 2018/0348987 A1 (hereinafter “Sawaki”).
Regarding claim 9, Furtwangler in view of Poulos discloses the head mounted display according to claim 1 (see above), further comprising an image pickup unit configured to take an image around an eye of the user (Furtwangler at FIGS. 1-2B at [0005] and [0035], eye trackers of the headset 132, camera placed within the headset, and [0045], FIG. 11B and [0071] sensor data including eye tracking).
However, although Furtwangler in view of Poulos discloses eye tracking as a method of input ([0005], [0035]), Furtwangler does not explicitly disclose wherein the receiver is configured to receive the operation input indicated by a line of sight of the user detected by the image taken by the image pickup unit.
In the same field of endeavor, Sawaki discloses a head mounted device and display system for input detection (FIG. 1, 100, HMD 120) wherein the receiver is configured to receive the operation input indicated by a line of sight of the user detected by the image taken by the image pickup unit (Sawaki, FIGS. 1-2, 5, and 14, and [0043]-[0044], [0057], [0077], [0084]-[0085], and [0158], eye gaze sensor 140 detects the direction of the line of sight using reflected light photographed by camera 160; [0037], menu images selectable by the user; and [0170], user has performed an operation (e.g., object selection by gaze along the line of sight); Abstract, command for moving the position of an object/avatar).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the head mounted display system of Furtwangler in view of Poulos to incorporate gaze input as disclosed by Sawaki because the references are within the same field of endeavor, namely, head mounted display systems with augmented reality imaging and display capable of receiving an input. The motivation to combine these references would have been to reduce motion sickness in VR when moving an object in a virtual space (see Sawaki at [0258]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Regarding claim 10, Furtwangler in view of Poulos discloses the head mounted display according to claim 1 (see above).
However, although Furtwangler in view of Poulos discloses a microphone ([0088]), Furtwangler in view of Poulos does not explicitly disclose further comprising a microphone configured to input a voice of the user, wherein the receiver is configured to receive the operation input indicated by the voice of the user inputted by the microphone.
In the same field of endeavor, Sawaki discloses a head mounted device and display system (FIG. 1, 100 and [0059]-[0065]) further comprising a microphone (FIG. 1, microphone 170 at [0034] and [0045]) configured to input a voice of the user ([0045] and [0170]), wherein the receiver is configured to receive the operation input indicated by the voice of the user inputted by the microphone (FIG. 14, [0170] voice selection of an object/avatar, where control module 1428 detects an utterance of the user 5 and transmits the sound from the microphone 170; Abstract, command for moving an object, as would be understood by one of ordinary skill).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the head mounted display system of Furtwangler in view of Poulos to incorporate voice input as disclosed by Sawaki because the references are within the same field of endeavor, namely, head mounted display systems with augmented reality imaging and display capable of receiving an input. The motivation to combine these references would have been to improve intuitive input by the user based on the specific application, and to reduce motion sickness in VR when moving an object in a virtual space (see Sawaki at [0258]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Regarding claim 19, it is similar in scope to claim 9 above; therefore, claim 19 is similarly analyzed and rejected as claim 9.
Regarding claim 20, it is similar in scope to claim 10 above; therefore, claim 20 is similarly analyzed and rejected as claim 10.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Lemay et al., US 12,475,635 A1;
Woo et al., US 11,538,443 A1;
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SARVESH J NADKARNI whose telephone number is (571)270-7562. The examiner can normally be reached 8AM-5PM M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, LunYi Lao can be reached at (571) 272-7671. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SARVESH J NADKARNI/Examiner, Art Unit 2621
/LUNYI LAO/Supervisory Patent Examiner, Art Unit 2621