DETAILED ACTION
This is the second non-final Office action and is responsive to the papers filed 08/29/2025. The amendments filed on 08/29/2025 have been entered and considered by the examiner. Claims 1-20 are currently pending and examined below. Claim 18 has been amended.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see page 7, filed 08/29/2025, with respect to claim 18 have been fully considered and are persuasive. The rejection of claim 18 under 35 U.S.C. 112(b) has been withdrawn.
Applicant’s arguments, see pages 7-8, filed 08/29/2025, with respect to the rejections of claims 1-20 under 35 U.S.C. 102(a)(1) and 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, those rejections have been withdrawn. However, upon further consideration, new grounds of rejection are made in view of Estee et al. (US 20220239887 A1).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 5-6, 8-10, 12-13 and 15-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Estee et al. (US 20220239887 A1; hereinafter Estee).
Regarding claim 1, Estee discloses:
An autonomous vehicle (Fig. 2: vehicle 200, [0037] FIG. 2 illustrates an example of a vehicular environment that can correspond to the space 110 in FIG. 1., [0038] autonomous operation of the vehicle 200 to enable a human or machine agent to control the vehicle 200 based on video showing what is happening outside the vehicle 200), comprising:
one or more sensors (Fig. 1: sensors 120, [0027] sensor(s) 120 can include a 360° camera 122, [0029] image sensors (e.g., an infrared camera), a radar sensor, a Light Detection and Ranging (LIDAR) sensor, an ultrasonic sensor, a gyroscope, a motion sensor, etc., [0038] an interior facing camera in combination with an in-cabin microphone can enable a vehicle occupant to engage in videoconferencing with one or more remote users); and
an autonomous driving computing device (Fig. 1: user device 130A, [0039] the user device 300 can correspond to any of the user devices 130 in FIG. 1.), comprising at least one processor ([0039] processors 310) in communication with at least one memory device ([0039] memory 320), and the at least one processor programmed to:
process sensor data received from the one or more sensors ([0037] Each of the cameras 222 is configured to capture images of the environment along a different direction);
render the processed sensor data into 3D images ([0037] Images captured by the cameras 222A to 222D can be stitched together to form an image analogous to that captured by the camera 122 in FIG. 1, for example, an image corresponding to a field of a view that is greater than 180°, possibly up to 360°.);
convert the 3D images into a video stream ([0037] Images captured by the cameras 222A to 222D can be stitched together to form an image analogous to that captured by the camera 122 in FIG. 1, for example, an image corresponding to a field of a view that is greater than 180°, possibly up to 360°…separate cameras can be used to form a video stream for distribution to user devices); and
transmit, via mobile communication, the video stream to a remote user ([0037] form a video stream for distribution to user devices).
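For illustration only, the following minimal sketch (in Python, using hypothetical class and function names; it is not drawn from Estee's disclosure and is not asserted to be the claimed implementation) shows one plausible arrangement of the vehicle-side pipeline mapped above for claim 1, and likewise for independent claims 8 and 15: processing sensor data, rendering it into 3D images, converting the images into a video stream, and transmitting the stream to a remote user.

```python
# Illustrative sketch only (not Estee's implementation and not the claimed
# method): a plausible vehicle-side pipeline -- process sensor data, render
# it into 3D images, convert the images into a video stream, and transmit
# the stream to a remote user. All names below are hypothetical.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Frame3D:
    """A rendered 3D image (placeholder for a stitched/rendered frame)."""
    pixels: List[List[int]] = field(default_factory=list)


def process_sensor_data(raw_samples: List[dict]) -> List[dict]:
    # e.g., time-align and filter raw camera/LIDAR samples
    return [s for s in raw_samples if s.get("valid", True)]


def render_to_3d(samples: List[dict]) -> List[Frame3D]:
    # e.g., stitch per-camera images into a wide (greater than 180 degree) frame
    return [Frame3D(pixels=[[s.get("intensity", 0)]]) for s in samples]


def encode_video(frames: List[Frame3D]) -> bytes:
    # stand-in for a real video encoder (e.g., H.264/H.265)
    return b"".join(bytes([f.pixels[0][0] % 256]) for f in frames)


def transmit(stream: bytes, remote_user: str) -> None:
    # stand-in for a cellular (mobile-communication) uplink
    print(f"sending {len(stream)} bytes to {remote_user}")


if __name__ == "__main__":
    raw = [{"intensity": 40, "valid": True}, {"intensity": 90, "valid": True}]
    transmit(encode_video(render_to_3d(process_sensor_data(raw))), "remote-user-1")
```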
Regarding claim 2, Estee discloses:
wherein the at least one processor is further programmed to:
render the processed sensor data by:
receiving manipulation and visualization parameters of rendering from the remote user ([0070] When a user device connects to a video sharing session (e.g., a videoconference), the direction of the virtual camera 410 may initially be set to the direction of the physical camera generating the input video (e.g., the camera 122 of FIG. 1). The user can change the direction of their virtual camera, for example, by physically rotating their device in a certain direction to make the virtual camera follow that direction.; [0073] At 706, the first user device receives information indicating a viewing direction specified by a second user.); and
adjusting 3D rendering based on the manipulation and visualization parameters ([0074] At 708, the first user device updates the output video stream based on the information indicating the viewing direction specified by the second user).
Regarding claim 3, Estee discloses:
wherein the at least one processor is further programmed to:
convert adjusted 3D images into the video stream ([0074] At 708, the first user device updates the output video stream based on the information indicating the viewing direction specified by the second user); and
transmit the video stream of the adjusted 3D images to the remote user ([0077] a user device can be configured to present a graphical user interface that includes an option to follow the POV of another user).
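Similarly, a minimal sketch (hypothetical names only, not taken from Estee) of how the manipulation and visualization parameters addressed in claims 2-3, 5 and 6, here a pan angle and a tilt angle received from the remote user, could be converted into a viewing direction used to adjust the 3D rendering:

```python
# Illustrative sketch only (hypothetical names, not drawn from Estee or the
# claims as filed): adjusting a virtual-camera rendering based on pan/tilt
# parameters received from a remote user.
import math
from dataclasses import dataclass


@dataclass
class ViewParams:
    pan_deg: float = 0.0   # rotation about the vertical axis
    tilt_deg: float = 0.0  # rotation about the lateral axis


def viewing_vector(params: ViewParams) -> tuple:
    """Convert pan/tilt angles into a unit viewing-direction vector (XYZ)."""
    pan, tilt = math.radians(params.pan_deg), math.radians(params.tilt_deg)
    return (math.cos(tilt) * math.cos(pan),
            math.cos(tilt) * math.sin(pan),
            math.sin(tilt))


def adjust_rendering(params: ViewParams) -> tuple:
    # A real renderer would re-project the stitched 3D imagery along this
    # direction; here we only compute the direction itself.
    return viewing_vector(params)


if __name__ == "__main__":
    # Remote user pans 30 degrees left and tilts 10 degrees up.
    print(adjust_rendering(ViewParams(pan_deg=-30.0, tilt_deg=10.0)))
```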
Regarding claim 5, Estee discloses:
wherein the at least one processor is further programmed to:
receive the manipulation and visualization parameters including panning ([0048] …The user can change the direction of their virtual camera, for example, by physically rotating their device in a certain direction to make the virtual camera follow that direction.; [0052] The viewing direction 512 can be represented as a vector 514 in a 3D coordinate system (e.g., XYZ); [0054] panning of a virtual camera).
Regarding claim 6, Estee discloses:
wherein the at least one processor is further programmed to:
receive the manipulation and visualization parameters including tilting ([0048] …The user can change the direction of their virtual camera, for example, by physically rotating their device in a certain direction to make the virtual camera follow that direction.; [0052] The viewing direction 512 can be represented as a vector 514 in a 3D coordinate system (e.g., XYZ)).
Regarding claim 8, Estee discloses:
An autonomous driving computing device (Fig. 1: user device 130A, [0039] the user device 300 can correspond to any of the user devices 130 in FIG. 1.) of an autonomous vehicle (Fig. 2: vehicle 200, [0037] FIG. 2 illustrates an example of a vehicular environment that can correspond to the space 110 in FIG. 1., [0038] autonomous operation of the vehicle 200 to enable a human or machine agent to control the vehicle 200 based on video showing what is happening outside the vehicle 200), comprising at least one processor ([0039] processors 310) in communication with at least one memory device ([0039] memory 320), and the at least one processor programmed to:
process sensor data received from one or more sensors (Fig. 1: sensors 120, [0027] sensor(s) 120 can include a 360° camera 122, [0029] image sensors (e.g., an infrared camera), a radar sensor, a Light Detection and Ranging (LIDAR) sensor, an ultrasonic sensor, a gyroscope, a motion sensor, etc., [0038] an interior facing camera in combination with an in-cabin microphone can enable a vehicle occupant to engage in videoconferencing with one or more remote users) of an autonomous vehicle ([0037] Each of the cameras 222 is configured to capture images of the environment along a different direction);
render the processed sensor data into 3D images ([0037] Images captured by the cameras 222A to 222D can be stitched together to form an image analogous to that captured by the camera 122 in FIG. 1, for example, an image corresponding to a field of a view that is greater than 180°, possibly up to 360°.);
convert the 3D images into a video stream ([0037] Images captured by the cameras 222A to 222D can be stitched together to form an image analogous to that captured by the camera 122 in FIG. 1, for example, an image corresponding to a field of a view that is greater than 180°, possibly up to 360°…separate cameras can be used to form a video stream for distribution to user devices); and
transmit, via mobile communication, the video stream to a remote user ([0037] form a video stream for distribution to user devices).
Regarding claim 9, Estee discloses:
wherein the at least one processor is further programmed to:
render the processed sensor data by:
receiving manipulation and visualization parameters of rendering from the remote user ([0070] When a user device connects to a video sharing session (e.g., a videoconference), the direction of the virtual camera 410 may initially be set to the direction of the physical camera generating the input video (e.g., the camera 122 of FIG. 1). The user can change the direction of their virtual camera, for example, by physically rotating their device in a certain direction to make the virtual camera follow that direction.; [0073] At 706, the first user device receives information indicating a viewing direction specified by a second user.); and
adjusting 3D rendering based on the manipulation and visualization parameters ([0074] At 708, the first user device updates the output video stream based on the information indicating the viewing direction specified by the second user).
Regarding claim 10, Estee discloses:
wherein the at least one processor is further programmed to:
convert adjusted 3D images into the video stream ([0074] At 708, the first user device updates the output video stream based on the information indicating the viewing direction specified by the second user); and
transmit the video stream of the adjusted 3D images to the remote user ([0077] a user device can be configured to present a graphical user interface that includes an option to follow the POV of another user).
Regarding claim 12, Estee discloses:
wherein the at least one processor is further programmed to:
receive the manipulation and visualization parameters including panning ([0048] …The user can change the direction of their virtual camera, for example, by physically rotating their device in a certain direction to make the virtual camera follow that direction.; [0052] The viewing direction 512 can be represented as a vector 514 in a 3D coordinate system (e.g., XYZ); [0054] panning of a virtual camera).
Regarding claim 13, Estee discloses:
wherein the at least one processor is further programmed to:
receive the manipulation and visualization parameters including tilting ([0048] …The user can change the direction of their virtual camera, for example, by physically rotating their device in a certain direction to make the virtual camera follow that direction.; [0052] The viewing direction 512 can be represented as a vector 514 in a 3D coordinate system (e.g., XYZ)).
Regarding claim 15, Estee discloses:
One or more non-transitory machine-readable storage media ([0039] memory 320) for manipulating visualization of sensor data ([0070] When a user device connects to a video sharing session (e.g., a videoconference), the direction of the virtual camera 410 may initially be set to the direction of the physical camera generating the input video (e.g., the camera 122 of FIG. 1). The user can change the direction of their virtual camera, for example, by physically rotating their device in a certain direction to make the virtual camera follow that direction.) of an autonomous vehicle (Fig. 2: vehicle 200, [0037] FIG. 2 illustrates an example of a vehicular environment that can correspond to the space 110 in FIG. 1., [0038] autonomous operation of the vehicle 200 to enable a human or machine agent to control the vehicle 200 based on video showing what is happening outside the vehicle 200), comprising a plurality of instructions stored thereon ([0037]) that, in response to being executed, cause a system (Fig. 1: system 100) to:
receive, via mobile communication, a video stream sent from an autonomous vehicle ([0037] Images captured by the cameras 222A to 222D can be stitched together to form an image analogous to that captured by the camera 122 in FIG. 1, for example, an image corresponding to a field of a view that is greater than 180°, possibly up to 360°…separate cameras can be used to form a video stream for distribution to user devices), wherein the video stream is generated by:
processing sensor data received from one or more sensors of the autonomous vehicle ([0037] Each of the cameras 222 is configured to capture images of the environment along a different direction);
rendering the processed sensor data into 3D images ([0037] Images captured by the cameras 222A to 222D can be stitched together to form an image analogous to that captured by the camera 122 in FIG. 1, for example, an image corresponding to a field of a view that is greater than 180°, possibly up to 360°.); and
converting the 3D images into the video stream ([0037] form a video stream for distribution to user devices).
Regarding claim 16, Estee discloses:
wherein the plurality of instructions further cause the system to:
receive manipulation and visualization parameters of rendering from a remote user ([0070] When a user device connects to a video sharing session (e.g., a videoconference), the direction of the virtual camera 410 may initially be set to the direction of the physical camera generating the input video (e.g., the camera 122 of FIG. 1). The user can change the direction of their virtual camera, for example, by physically rotating their device in a certain direction to make the virtual camera follow that direction.; [0073] At 706, the first user device receives information indicating a viewing direction specified by a second user.); and
transmit the manipulation and visualization parameters to the autonomous vehicle, wherein the autonomous vehicle is configured to adjust 3D rendering based on the manipulation and visualization parameters ([0074] At 708, the first user device updates the output video stream based on the information indicating the viewing direction specified by the second user).
Regarding claim 17, Estee discloses:
wherein the plurality of instructions further cause the system to:
receive an adjusted video stream from the autonomous vehicle ([0077] a user device can be configured to present a graphical user interface that includes an option to follow the POV of another user), wherein the adjusted video stream is generated by:
converting adjusted 3D images into the adjusted video stream ([0074] At 708, the first user device updates the output video stream based on the information indicating the viewing direction specified by the second user).
Regarding claim 18, Estee discloses:
wherein the plurality of instructions further cause the system to:
receive the manipulation and visualization parameters including at least one of zooming or panning ([0048] …The user can change the direction of their virtual camera, for example, by physically rotating their device in a certain direction to make the virtual camera follow that direction.; [0052] The viewing direction 512 can be represented as a vector 514 in a 3D coordinate system (e.g., XYZ); [0054] panning of a virtual camera).
Regarding claim 19, Estee discloses:
wherein the plurality of instructions further cause the system to:
receive the manipulation and visualization parameters including tilting ([0048] …The user can change the direction of their virtual camera, for example, by physically rotating their device in a certain direction to make the virtual camera follow that direction.; [0052] The viewing direction 512 can be represented as a vector 514 in a 3D coordinate system (e.g., XYZ)).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 4, 7, 11, 14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Estee in view of Okumura et al. (US 20160139594 A1; hereinafter Okumura).
Regarding claim 4, Estee does not specifically disclose:
wherein the at least one processor is further programmed to:
receive the manipulation and visualization parameters including zooming.
However, Okumura discloses:
wherein the at least one processor is further programmed to:
receive the manipulation and visualization parameters including zooming (the remote operator remotely controls the sensors 130 to zoom in or out; [0029]).
Estee and Okumura are analogous art to the claimed invention because both are in the field of vehicle remote operation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Estee’s vehicle remote operation to further incorporate Okumura’s vehicle remote operation for the advantage of zooming the video in or out and sending only the limited subset of the captured data needed to perform the remote operation, which conserves bandwidth (Okumura, [0010]).
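As a further illustration of the bandwidth rationale cited above, a minimal sketch (a hypothetical helper, not Okumura's implementation) of how a zoom parameter can both change the displayed view and limit the amount of captured data that must be sent:

```python
# Illustrative sketch only (hypothetical helper, not Okumura's implementation):
# a zoom factor selects a central crop, so only that subset of the captured
# frame needs to be transmitted.
from typing import List

Image = List[List[int]]  # rows of pixel intensities


def zoom_crop(image: Image, zoom: float) -> Image:
    """Return the central crop corresponding to a zoom factor >= 1.0."""
    if zoom < 1.0:
        raise ValueError("zoom factor must be >= 1.0")
    h, w = len(image), len(image[0])
    ch, cw = max(1, int(h / zoom)), max(1, int(w / zoom))
    top, left = (h - ch) // 2, (w - cw) // 2
    return [row[left:left + cw] for row in image[top:top + ch]]


if __name__ == "__main__":
    img = [[r * 10 + c for c in range(8)] for r in range(8)]
    cropped = zoom_crop(img, zoom=2.0)
    # A 2x zoom sends a 4x4 crop instead of the full 8x8 frame.
    print(len(cropped), "x", len(cropped[0]))
```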
Regarding claim 7, Estee does not specifically disclose:
wherein the at least one processor is further programmed to:
receive the manipulation and visualization parameters including changing a source of the sensor data.
However, Okumura discloses:
wherein the at least one processor is further programmed to:
receive the manipulation and visualization parameters including changing a source of the sensor data (the sensors 130 are remotely controlled by the remote operator so that different or additional sensor data is collected; [0028]-[0035]).
Estee and Okumura are analogous art to the claimed invention because both are in the field of vehicle remote operation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Estee’s vehicle remote operation to further incorporate Okumura’s vehicle remote operation for the advantage of sending only the limited subset of the captured data needed to perform the remote operation, which conserves bandwidth (Okumura, [0010]).
Regarding claim 11, Estee does not specifically disclose:
wherein the at least one processor is further programmed to:
receive the manipulation and visualization parameters including zooming.
However, Okumura discloses:
wherein the at least one processor is further programmed to:
receive the manipulation and visualization parameters including zooming (the remote operator remotely controls the sensors 130 to zoom in or out; [0029]).
Estee and Okumura are analogous art to the claimed invention because both are in the field of vehicle remote operation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Estee’s vehicle remote operation to further incorporate Okumura’s vehicle remote operation for the advantage of zooming the video in or out and sending only the limited subset of the captured data needed to perform the remote operation, which conserves bandwidth (Okumura, [0010]).
Regarding claim 14, Estee does not specifically disclose:
wherein the at least one processor is further programmed to:
receive the manipulation and visualization parameters including changing a source of the sensor data.
However, Okumura discloses:
wherein the at least one processor is further programmed to:
receive the manipulation and visualization parameters including changing a source of the sensor data (the sensors 130 are remotely controlled by the remote operator so that different or additional sensor data is collected; [0028]-[0035]).
Estee and Okumura are analogous art to the claimed invention because both are in the field of vehicle remote operation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Estee’s vehicle remote operation to further incorporate Okumura’s vehicle remote operation for the advantage of sending only the limited subset of the captured data needed to perform the remote operation, which conserves bandwidth (Okumura, [0010]).
Regarding claim 20, Estee does not specifically disclose:
wherein the plurality of instructions further cause the system to:
receive the manipulation and visualization parameters including changing a source of the sensor data.
However, Okumura discloses:
wherein the plurality of instructions further cause the system to:
receive the manipulation and visualization parameters including changing a source of the sensor data (the sensors 130 are remotely controlled by the remote operator so that different or additional sensor data is collected; [0028]-[0035]).
Estee and Okumura are analogous art to the claimed invention because both are in the field of vehicle remote operation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Estee’s vehicle remote operation to further incorporate Okumura’s vehicle remote operation for the advantage of sending only the limited subset of the captured data needed to perform the remote operation, which conserves bandwidth (Okumura, [0010]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAYSUN WU whose telephone number is (571)272-1528. The examiner can normally be reached Monday-Friday 8AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hunter Lonsberry, can be reached at (571)272-7298. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PAYSUN WU/Examiner, Art Unit 3665
/DONALD J WALLACE/Primary Examiner, Art Unit 3665