DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
On pages 6-7 of the Applicant’s Response, Applicant argues with respect to claims 1 and 12 that Imai fails to teach or suggest an image streaming system that generates a first stream and a second stream from image frames output from a video camera (singular). Specifically, Applicant argues that the Office Action acknowledges that the “image streams” are provided from two different monitoring units, a compartment monitoring unit 32 and an ambient environment monitoring unit 33. In contrast, claim 1 requires that the first and second streams be generated from image frames that are all output from a single video camera.
The Examiner respectfully disagrees because the claims do not recite a single camera that outputs image frames as asserted by Applicant’s Representative. The claim merely recites “a video camera carried by the vehicle to output image frames”; however, the claim does not explicitly require that a single video camera output the image frames. Moreover, the claim language does not preclude the use of multiple cameras, as in the case of Imai’s compartment monitoring unit 32 and ambient environment monitoring unit 33. Therefore, Imai meets the claim limitation of “a video camera carried by the vehicle to output image frames.”
Applicant is reminded that during patent examination, the pending claims must be interpreted as broadly as their terms reasonably allow. In re Zletz, 893 F.2d 319, 321, 13 USPQ2d 1320, 1322 (Fed. Cir. 1989). In determining the patentability of claims, the PTO gives claim language its broadest reasonable interpretation consistent with the specification and claims. In re Morris, 127 F.3d 1048, 1054, 44 USPQ2d 1023, 1027 (Fed. Cir. 1997). See MPEP § 904.01. Limitations not appearing in the claims cannot be relied upon for patentability. In re Self, 671 F.2d 1344, 1348 (CCPA 1982). Particular embodiments appearing in the written description are not to be read into the claims if the claim language is broader than the embodiment. See Superguide Corp. v. DirecTV Enterprises, Inc., 358 F.3d 870, 875 (Fed. Cir. 2004).
On pages 7-10 of the Applicant’s Response, Applicant argues with respect to claim 2 that Aitken does not disclose anything about generating any streams, let alone a third stream from image frames output by a single video camera.
The Examiner again notes that the claims do not explicitly recite a single video camera, as explained above. Imai teaches outputting two image streams (vehicles 30 include passenger compartment monitoring unit 32 and ambient environment monitoring unit 33 that provide respective image streams); however, Imai fails to explicitly disclose a third stream. In this case, Aitken discloses a system and method for facilitating communication with autonomous vehicles. Specifically, Aitken discloses that a human operator associated with the autonomous vehicle assistance system can participate in a video conference with the user of the autonomous vehicle via the onboard human-machine interface ([0027]-[0028], [0075]). One of ordinary skill would recognize that a video conference conducted between a passenger of an autonomous vehicle and a human operator providing remote assistance would include generating video streams from both participants. Thus, Aitken teaches the limitation “wherein the multi-media framework is configured to generate a third stream from the image frames, the system further comprising a third consumer to receive the third stream, wherein the third consumer is selected from the group of consumers consisting of: the neural network; the live video stream; and the historical time clip storage.”
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1 and 12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Imai et al. (US Pub. 2021/0094567), herein referenced as Imai.
Regarding claim 1, Imai discloses “An image frame streaming system comprising: a vehicle ([0004], [0018]-[0019], Fig. 1, i.e., autonomous vehicles 30);
a video camera carried by the vehicle to output image frames ([0042]-[0047], Fig. 1, i.e., vehicles 30 includes passenger compartment monitoring unit 32 and ambient environment monitoring unit 33 that both include cameras for monitoring the passenger compartment and ambient environment);
a multi-media framework carried by the vehicle and configured to generate a first stream and a second stream from the image frames ([0042]-[0047], Figs. 1, 4, i.e., vehicles 30 includes passenger compartment monitoring unit 32 and ambient environment monitoring unit 33 that provide respective image streams);
a first consumer to receive the first stream and a second consumer to receive the second stream, wherein the first consumer and the second consumer are each selected from a group of consumers consisting of: a neural network; a live video stream presenter; and a historical time clip storage.” ([0038]-[0039], [0045], Figs. 1, 4, i.e., past image receiving unit 16 and image storage unit 35 receive and store captured images).
Regarding claim 12, Imai discloses “An image frame streaming method comprising: capturing image frames with a camera carried by a vehicle ([0042]-[0047], Fig. 1, i.e., vehicles 30 includes passenger compartment monitoring unit 32 and ambient environment monitoring unit 33 that both include cameras for monitoring the passenger compartment and ambient environment);
generating a first stream and a second stream from the image frames with a multimedia framework ([0042]-[0047], Figs. 1, 4, i.e., vehicles 30 includes passenger compartment monitoring unit 32 and ambient environment monitoring unit 33 that provide respective image streams);
transmitting the first stream to one of a neural network, a live video stream presenter and a historical time clip storage; and transmitting the second stream to another of the neural network, the live video stream presenter and the historical time clip storage.” ([0038]-[0039], [0045], Figs. 1, 4, i.e., past image receiving unit 16 and image storage unit 35 receive and store captured images).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2, 7, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Imai in view of Aitken et al. (US Pub. 2019/0222986), herein referenced as Aitken.
Regarding claim 2, Imai fails to explicitly disclose “wherein the multi-media framework is configured to generate a third stream from the image frames, the system further comprising a third consumer to receive the third stream, wherein the third consumer is selected from the group of consumers consisting of: the neural network; the live video stream; and the historical time clip storage.”
Aitken teaches the technique of providing wherein the multi-media framework is configured to generate a third stream from the image frames, the system further comprising a third consumer to receive the third stream, wherein the third consumer is selected from the group of consumers consisting of: the neural network; the live video stream; and the historical time clip storage ([0027]-[0028], [0075], i.e., a human operator associated with the autonomous vehicle assistance system can participate in a video conference with the user of the autonomous vehicle via the onboard human-machine interface. This can allow the human operator to comfort the user as well as work to address the issue discovered by and/or experienced by the user).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing wherein the multi-media framework is configured to generate a third stream from the image frames, the system further comprising a third consumer to receive the third stream, wherein the third consumer is selected from the group of consumers consisting of: the neural network; the live video stream; and the historical time clip storage as taught by Aitken, to improve the remote monitoring system for autonomous vehicles of Imai for the predictable result of allowing a human operator to comfort the user as well as work to address the issue discovered by and/or experienced by the user ([0028]).
Regarding claim 7, Imai discloses “wherein the second consumer comprises the historical time clip storage, the historical time clip storage comprising: a video clip uploader carried by the vehicle and configured to receive a video clip based on the second stream… ([0020]-[0021], Fig. 1, i.e., communication unit performs various data exchange with remote monitoring apparatus);
a video clip display device to receive the video clip from the … storage and display the video clip to a person.” ([0060], i.e., an operator watches the real-time images and determines the situation in which the autonomous vehicle 30 is currently placed and provides instructions to the autonomous vehicle 30 according to the determined situation).
Imai fails to explicitly disclose a cloud-based storage configured to receive the video clip from the video clip uploader.
Aitken teaches the technique of providing a cloud-based storage configured to receive the video clip from the video clip uploader ([0017], [0019], [0025], [0038], Fig. 2, i.e., a data center can obtain sensor data, perception data, prediction data, motion planning data, and/or other data generated onboard the autonomous vehicle). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing a cloud-based storage configured to receive the video clip from the video clip uploader as taught by Aitken, to improve the remote monitoring system for autonomous vehicles of Imai for the predictable result of providing cloud infrastructure allowing scalability and flexibility.
Regarding claim 13, Imai fails to disclose “wherein the historical time clip storage comprises a cloud-based server.”
Aitken teaches the technique of providing a cloud-based server ([0017], [0019], [0025], [0038], Fig. 2, i.e., a data center can obtain sensor data, perception data, prediction data, motion planning data, and/or other data generated onboard the autonomous vehicle). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing a cloud-based server as taught by Aitken, to improve the remote monitoring system for autonomous vehicles of Imai for the predictable result of providing cloud infrastructure allowing scalability and flexibility.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Imai in view of Aitken, and further in view of Napanda et al. (US Pub. 2022/0114888), herein referenced as Napanda.
Regarding claim 3, the combination fails to disclose “wherein the multimedia framework comprises a GStreamer pipeline-based multimedia framework.”
Napanda teaches the technique of providing wherein the multimedia framework comprises a GStreamer pipeline-based multimedia framework ([0047], [0057], Fig. 4, i.e., streaming 1951, using functions from the gstreamer library). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing wherein the multimedia framework comprises a GStreamer pipeline-based multimedia framework as taught by Napanda, to improve the remote monitoring system for autonomous vehicles of Imai for the predictable result of providing a versatile media framework that supports a wide range of media-handling components, including audio and video playback, recording, and streaming.
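For context on the claimed framework, a GStreamer pipeline-based multimedia framework conventionally splits a single camera source into multiple consumer branches using a “tee” element. The following is an illustrative sketch only; the element names, device path, and branch sinks are assumptions for illustration and are not taken from any cited reference or from the claims:

```python
# Illustrative sketch (assumptions, not from any cited reference):
# composing a gst-launch-1.0 style pipeline description in which one
# camera source feeds a "tee" element that fans out to several
# consumer branches, as a GStreamer pipeline conventionally does.

def build_pipeline(source: str, branches: list[str]) -> str:
    """Join one source and N consumer branches into a pipeline string."""
    tee = source + " ! tee name=t"
    # Each branch pulls from the tee through its own queue element.
    parts = [tee] + [f"t. ! queue ! {b}" for b in branches]
    return " ".join(parts)

desc = build_pipeline(
    "v4l2src device=/dev/video0",  # hypothetical camera source
    [
        "autovideosink",  # e.g., a live video presenter branch
        "x264enc ! mp4mux ! filesink location=clip.mp4",  # clip storage
        "appsink name=nn",  # e.g., a neural-network consumer branch
    ],
)
print(desc)
```

The sketch only builds the textual pipeline description; actually running it would require GStreamer (e.g., via `gst-launch-1.0`), and the specific sink elements shown are placeholders.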
Claims 4, 6, 8-11, and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Imai in view of Aitken, and further in view of Eperjesi et al. (US Pub. 2022/0137615), herein referenced as Eperjesi.
Regarding claim 4, Imai fails to disclose “wherein the first consumer comprises the live video stream presenter, the live video stream presenter comprising a web real time communication (RTC) server carried by the vehicle.”
Aitken teaches the technique of providing wherein the first consumer comprises the live video stream presenter ([0027]-[0028], [0075], i.e., a human operator associated with the autonomous vehicle assistance system can participate in a video conference with the user of the autonomous vehicle via the onboard human-machine interface. This can allow the human operator to comfort the user as well as work to address the issue discovered by and/or experienced by the user). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing wherein the first consumer comprises the live video stream presenter as taught by Aitken, to improve the remote monitoring system for autonomous vehicles of Imai for the predictable result of allowing a human operator to comfort the user as well as work to address the issue discovered by and/or experienced by the user ([0028]).
The combination still fails to disclose a web real time communication (RTC) server carried by the vehicle.
Eperjesi teaches the technique of providing a web real time communication (RTC) server carried by the vehicle ([0047], [0154], i.e., autonomous vehicle can communicate with a remote computing system via one or more networks using one or more protocols (e.g., webRTC protocol, etc.)). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing a web real time communication (RTC) server carried by the vehicle as taught by Eperjesi, to improve the remote monitoring system for autonomous vehicles of Imai for the predictable result of providing a protocol that enables web applications and sites to capture, stream, and exchange media (audio and video) and data directly between browsers without the need for intermediaries like traditional servers.
Regarding claim 6, Imai fails to disclose “wherein the live stream video presenter further comprises: a cloud-based media server to receive the first stream from the web RTC server; and a live stream display device to receive the first stream from the media server and display the live stream to a person.”
Aitken teaches the technique of providing wherein the live stream video presenter further comprises: a cloud-based media server to receive the first stream … and a live stream display device to receive the first stream from the media server and display the live stream to a person ([0027]-[0028], [0038], [0075], Fig. 1, i.e., a human operator associated with the autonomous vehicle assistance system can participate in a video conference with the user of the autonomous vehicle via the onboard human-machine interface. Further still, operations computing system 115 can be a cloud-based server system). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing wherein the live stream video presenter further comprises: a cloud-based media server to receive the first stream … and a live stream display device to receive the first stream from the media server and display the live stream to a person as taught by Aitken, to improve the remote monitoring system for autonomous vehicles of Imai for the predictable result of providing cloud infrastructure allowing scalability and flexibility and also allowing a human operator to comfort the user as well as work to address the issue discovered by and/or experienced by the user ([0028]).
The combination still fails to disclose a web RTC server.
Eperjesi teaches the technique of providing a web RTC server ([0047], [0154], i.e., autonomous vehicle can communicate with a remote computing system via one or more networks using one or more protocols (e.g., webRTC protocol, etc.)). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing a web RTC server as taught by Eperjesi, to improve the remote monitoring system for autonomous vehicles of Imai for the predictable result of providing a protocol that enables web applications and sites to capture, stream, and exchange media (audio and video) and data directly between browsers without the need for intermediaries like traditional servers.
Regarding claim 8, Imai discloses “…wherein the second consumer comprises the historical time clip storage, the historical time clip storage to receive the second stream…” ([0038]-[0039], [0045], Figs. 1, 4, i.e., past image receiving unit 16 and image storage unit 35 receive and store captured images).
Imai fails to disclose wherein the first consumer comprises the live video stream presenter, the live video stream presenter comprising a web real time communication (RTC) server carried by the vehicle, and wherein the multi-media framework is configured to generate a third stream from the image frames, the system further comprising a third consumer to receive the third stream, wherein the third consumer comprises a neural network.
Aitken teaches the technique of providing wherein the first consumer comprises the live video stream presenter ([0027]-[0028], [0075], i.e., a human operator associated with the autonomous vehicle assistance system can participate in a video conference with the user of the autonomous vehicle via the onboard human-machine interface. This can allow the human operator to comfort the user as well as work to address the issue discovered by and/or experienced by the user). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing wherein the first consumer comprises the live video stream presenter as taught by Aitken, to improve the remote monitoring system for autonomous vehicles of Imai for the predictable result of allowing a human operator to comfort the user as well as work to address the issue discovered by and/or experienced by the user ([0028]).
The combination still fails to disclose a web real time communication (RTC) server carried by the vehicle and the multi-media framework being configured to generate a third stream from the image frames, the system further comprising a third consumer to receive the third stream, wherein the third consumer comprises a neural network.
Eperjesi teaches the technique of providing a web real time communication (RTC) server carried by the vehicle ([0047], [0154], i.e., autonomous vehicle can communicate with a remote computing system via one or more networks using one or more protocols (e.g., webRTC protocol, etc.)) and the multi-media framework being configured to generate a third stream from the image frames, the system further comprising a third consumer to receive the third stream, wherein the third consumer comprises a neural network ([0053], [0162], Fig. 1, i.e., remote assistance system can include one or more machine-learned models (e.g., neural networks, etc.) configured to process the composite sensor data). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing a web real time communication (RTC) server carried by the vehicle and the multi-media framework configured to generate a third stream from the image frames, the system further comprising a third consumer to receive the third stream, wherein the third consumer comprises a neural network, as taught by Eperjesi, to improve the remote monitoring system for autonomous vehicles of Imai for the predictable result of providing a protocol that enables web applications and sites to capture, stream, and exchange media (audio and video) and data directly between browsers without the need for intermediaries like traditional servers, and providing learning models to process sensor data and identify hazards.
Regarding claim 9, Imai discloses “wherein the historical time clip storage comprises: a video clip uploader carried by the vehicle and configured to receive a video clip based on the second stream… ([0020]-[0021], Fig. 1, i.e., communication unit performs various data exchange with remote monitoring apparatus);
a video clip display device to receive the video clip from the … storage and display the video clip to a person.” ([0060], i.e., an operator watches the real-time images and determines the situation in which the autonomous vehicle 30 is currently placed and provides instructions to the autonomous vehicle 30 according to the determined situation).
Imai fails to explicitly disclose a cloud-based storage configured to receive the video clip from the video clip uploader.
Aitken teaches the technique of providing a cloud-based storage configured to receive the video clip from the video clip uploader ([0017], [0019], [0025], [0038], Fig. 2, i.e., a data center can obtain sensor data, perception data, prediction data, motion planning data, and/or other data generated onboard the autonomous vehicle). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing a cloud-based storage configured to receive the video clip from the video clip uploader as taught by Aitken, to improve the remote monitoring system for autonomous vehicles of Imai for the predictable result of providing cloud infrastructure allowing scalability and flexibility.
Regarding claim 10, claim 10 recites limitations similar to those of claim 6 and is thus rejected for the reasons set forth above in the rejection of claim 6.
Regarding claim 11, Imai fails to disclose “a first web real time communication server carried by the vehicle to receive the first stream; a second video camera carried by the vehicle to output second image frames; a second multi-media framework carried by the vehicle and configured to generate a third stream and a fourth stream from the second image frames; and a second web real time communication server carried by the vehicle to receive the third stream.”
Aitken teaches the technique of providing a first … server carried by the vehicle to receive the first stream ([0049]-[0054], Fig. 1, i.e., perception system 160, a prediction system 165, a motion planning system 170, and/or other systems that cooperate to perceive the surrounding environment of the vehicle 110); a second video camera carried by the vehicle to output second image frames ([0046], Fig. 1, i.e., vehicle sensors 130 can include one or more cameras); a second multi-media framework carried by the vehicle and configured to generate a third stream and a fourth stream from the second image frames ([0045], i.e., the vehicle 110 can include one or more vehicle sensors 130, an autonomy computing system 135, one or more vehicle control systems 140, and other systems. One or more of these systems can be configured to communicate with one another via a communication channel); and a second … server carried by the vehicle to receive the third stream ([0049]-[0054], Fig. 1, i.e., perception system 160, a prediction system 165, a motion planning system 170, and/or other systems that cooperate to perceive the surrounding environment of the vehicle 110).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing a first … server carried by the vehicle to receive the first stream; a second video camera carried by the vehicle to output second image frames; a second multi-media framework carried by the vehicle and configured to generate a third stream and a fourth stream from the second image frames; and a second … server carried by the vehicle to receive the third stream, as taught by Aitken, to improve the remote monitoring system for autonomous vehicles of Imai for the predictable result of perceiving the surrounding environment of the vehicle 110 and determining a motion plan for controlling the motion of the vehicle 110 accordingly ([0049]).
The combination still fails to disclose a web real time communication (RTC) server carried by the vehicle.
Eperjesi teaches the technique of providing a web real time communication (RTC) server carried by the vehicle ([0047], [0154], i.e., autonomous vehicle can communicate with a remote computing system via one or more networks using one or more protocols (e.g., webRTC protocol, etc.)). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing a web real time communication (RTC) server carried by the vehicle as taught by Eperjesi, to improve the remote monitoring system for autonomous vehicles of Imai for the predictable result of providing a protocol that enables web applications and sites to capture, stream, and exchange media (audio and video) and data directly between browsers without the need for intermediaries like traditional servers.
Regarding claim 14, claim 14 recites limitations similar to those of claim 4 and is thus rejected for the reasons set forth above in the rejection of claim 4.
Regarding claim 15, claim 15 recites limitations similar to those of claim 8 and is thus rejected for the reasons set forth above in the rejection of claim 8.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Imai in view of Aitken, in view of Eperjesi, and further in view of Lazar et al. (US Pub. 2022/0256253), herein referenced as Lazar.
Regarding claim 5, the combination fails to disclose “wherein the web RTC server comprises a Janus Gateway server.”
Lazar teaches the technique of providing wherein the web RTC server comprises a Janus Gateway server ([0070], [0106]-[0107], i.e., Janus WebRTC gateway). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing wherein the web RTC server comprises a Janus Gateway server as taught by Lazar, to improve the remote monitoring system for autonomous vehicles of Imai for the predictable result of providing a lightweight and modular architecture to extend and customize functionality while minimizing resource consumption.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Alexander Q Huerta whose telephone number is (571)270-3582. The examiner can normally be reached M-F 9:00 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton can be reached at (571)272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALEXANDER Q HUERTA/Primary Examiner, Art Unit 2425 September 15, 2025