DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Joint Inventors
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Priority / Domestic Benefit
The instant application claims the benefit of the earlier filing date of provisional application 63/599,798, filed 16 November 2023. The effective filing date is determined on a claim-by-claim basis. The examiner reviewed the provisional application and found that support exists for the following claims:
1, 2, 4, 5, 6, 13, 14, 15, 16, 19
Support from the provisional application (considering both the written description and the drawings) is not provided for the following claims, resulting in a later effective filing date of 15 November 2024 (the filing date of the instant application):
3, 7, 8, 9, 10, 11, 12, 17, 18
In the event that intervening art is relied upon in a prior art rejection, the examiner respectfully requests that applicant explicitly and clearly point out where support is located in the provisional application noted above.
The claim to domestic benefit for the claims supported by the provisional application, as noted above, is acknowledged, as the requirements of 37 CFR 1.78 and 35 U.S.C. 119(e) are met.
Information Disclosure Statement
The information disclosure statement (IDS) filed on 07 January 2026 complies with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 9 and 10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 9 recites “to reflect a condition of the robot.” In ascertaining the meaning of “a condition”, the examiner consulted the specification and found only the teachings of paragraph [0056], which states “to reflect a condition of the robot 100. For example, in response to the nozzle 120 of the robot 100 being repositioned to aim in a particular direction, the processing device 130 updates the directional indicator 852 to reflect the movement of the nozzle 120.” The examiner notes that this description is insufficient to accurately and explicitly define what is claimed by “a condition”. “A condition” could refer to any number of other conditions, such as wheel position, speed, error status, configuration of the robot (attachments, wheel count, etc.), or communication status with other robots or control devices, among countless other possibilities. Because “a condition” does not have a well-understood meaning in this context and the specification does not adequately describe what is explicitly claimed, the term “a condition” renders the claim indefinite.
The examiner therefore notes that this phrase is indefinite and fails to particularly point out and distinctly claim the invention of the instant application. Consistent with USPTO examination practice and in the interest of compact prosecution, the claim limitations will be treated as best understood by the examiner, which, under the broadest reasonable interpretation (BRI), encompasses any one or more of the interpretations discussed above.
As claim 10 depends upon claim 9 but does not further limit the subject matter to resolve the noted issue, the claim is rejected due to dependency.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 9, 13-17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Jiang et al. (US 2023/0316635 A1; Published 05 October 2023, hereinafter Jiang).
Regarding independent claims 1 (method), 13 (apparatus), and 19 (system): Jiang discloses A method of imaging surroundings of a firefighting robot, comprising: (per claim 1) (Paragraph [0062, 0064-0065, 0272], Jiang discloses a method of imaging surroundings of a vehicle, the vehicle constituting both a fire truck and a robotic device/vehicle with a body. For brevity, the examiner will refer to the robot as the “vehicle”, consistent with Jiang) / A firefighting robot, comprising: a robot body; (per claim 13) (Paragraph [0178, 0272] and Figure [13], Jiang discloses a system/apparatus/method of environmental monitoring of a vehicle, the vehicle constituting both a fire truck and a robotic device/vehicle with a body. For brevity, the examiner will refer to the robot as the “vehicle”, consistent with Jiang) / A firefighting system, comprising: a firefighting robot, including: a robot body; (per claim 19) (Paragraph [0178, 0272] and Figure [13], Jiang discloses a system/apparatus/method of environmental monitoring of a vehicle, the vehicle constituting both a fire truck and a robotic device/vehicle with a body. For brevity, the examiner will refer to the robot as the “vehicle”, consistent with Jiang)
receiving images from multiple cameras mounted to the firefighting robot and facing respective directions; (per claim 1) / multiple cameras mounted to the robot body and facing respective directions relative to the robot body; (per claim 13) / multiple cameras mounted to the robot body and facing respective directions relative to the robot body; (per claim 19) (Paragraph [0118] and Figure [2A-2B, 4A, 4E, 35B], Jiang discloses receiving imaging data from four cameras on the vehicle with different but partially overlapping fields of view (respective directions))
combining the images from the cameras to construct a top-down view showing a central image of the robot and surroundings of the robot captured by the cameras; and (per claim 1) / control circuitry operatively coupled with the cameras, the control circuitry constructed and arranged to combine images from the cameras to construct a top-down view showing a central image of the robot and surroundings of the robot captured by the cameras; and (per claim 13) / control circuitry operatively coupled with the cameras, the control circuitry constructed and arranged to combine images from the cameras to construct a top-down view showing a central image of the robot and surroundings of the robot captured by the cameras; and (per claim 19) (Paragraph [0118-0119, 0136] and Figure [2A-2B, 4A, 4E, 35B], Jiang discloses receiving imaging data from four cameras on the vehicle with different but partially overlapping fields of view and stitching them to form a composite image representing a single merged 360 view. As can be seen in the figures presented (such as Figure [4E]), a top-down view showing a central image of the vehicle and its surroundings is created)
transmitting the top-down view to a control device remote from the robot for display by the control device. (per claim 1) / wireless communication circuitry constructed and arranged to transmit the top-down view for display remotely from the robot. (per claim 13) / wireless communication circuitry constructed and arranged to wirelessly transmit the top-down view for display remotely from the robot; and (per claim 19) (Paragraph [0101-0103, 0261, 0268], Jiang discloses transmitting/streaming the video/view to a remote location, such as a client device or a server used to generate a rendering of the visualization)
a remote-control device, including: a wireless interface constructed and arranged to receive the top-down view from the robot; and a screen constructed and arranged to display the top-down view. (per claim 19) (Paragraph [0075, 0101-0103, 0261, 0268, 0381] and Figure [1], Jiang discloses a remote operation interface that accepts input from a user (and thus, a control device) which receives the visualization data and generates a rendering on a display)
Regarding the above-noted applicability to a firefighting robot, the examiner notes that the explicit terminology “firefighting robot” is not used in the disclosure of Jiang. However, paragraph [0272] states “The autonomous vehicle 3500 (alternatively referred to herein as the “vehicle 3500”) may include, without limitation, … a fire truck, … a robotic vehicle …, and/or another type of vehicle (e.g., that is unmanned and/or that accommodates one or more passengers).” By way of the “and/or”, this indicates that two or more options may be satisfied concurrently, and by indicating that both a “fire truck” (well known to comprise a nozzle for firefighting) and a “robotic vehicle” are included, Jiang reasonably discloses a robotic fire vehicle. The examiner notes that merely applying the disclosure of Jiang specifically to a firefighting robot as the vehicle of concern is an obvious modification of Jiang, with rationale provided from Jiang itself as noted above (applicability to one or more of the listed vehicles).
Regarding claims 2 and 16: Parent claims 1 and 13 are obvious over Jiang. Jiang further discloses wherein a field of view of a first camera of the cameras partly overlaps with a field of view of a second camera of the cameras, and (per claim 2) / wherein a field of view of a first camera of the cameras partly overlaps with a field of view of a second camera of the cameras, and (per claim 16) (Paragraph [0118-0119, 0136] and Figure [2A-2B, 4A, 4E, 35B], Jiang discloses receiving imaging data from four cameras on the vehicle with different but partially overlapping fields of view and stitching them to form a composite image representing a single merged 360 view. As can be seen in the figures presented (such as Figure [4E]), a top-down view showing a central image of the vehicle and its surroundings is created)
wherein combining the images from the cameras includes combining an image from the first camera with an image from the second camera. (per claim 2) / wherein the control circuitry constructed and arranged to combine the images is further constructed and arranged to combine an image from the first camera with an image from the second camera. (per claim 16) (Paragraph [0118-0119, 0136] and Figure [2A-2B, 4A, 4E, 35B], Jiang discloses receiving imaging data from four cameras on the vehicle with different but partially overlapping fields of view and stitching them to form a composite image representing a single merged 360 view. As can be seen in the figures presented (such as Figure [4E]), a top-down view showing a central image of the vehicle and its surroundings is created)
Regarding claim 3: Parent claim 2 is obvious over Jiang. Jiang further discloses wherein the image from the first camera and the image from the second camera include respective views of a calibration marker placed in an environment of the robot, and (Paragraph [0149-0152] and Figure [2A-2B, 5, 7A-7B], Jiang discloses imaging mask projections that are projected onto the environment to align the cameras)
wherein combining the images from the cameras includes aligning the respective views of the calibration marker. (Paragraph [0149-0152] and Figure [2A-2B, 5, 7A-7B], Jiang discloses imaging mask projections that are projected onto the environment to align the cameras)
Regarding claim 4: Parent claim 1 is obvious over Jiang. Jiang further discloses wherein combining the images from the cameras includes creating the top-down view as a 360-degree view around the robot. (Paragraph [0065, 0070, 0136, 0235] and Figure [4E], Jiang discloses creating a stitched 360 bird’s eye (top down) view)
Regarding claims 5 and 14: Parent claims 1 and 13 are obvious over Jiang. Jiang further discloses wherein combining the images from the cameras includes placing views of the surroundings of the robot relative to the central image of the robot, such that objects in front of the robot appear as images displayed in front of the central image, objects behind the robot appear as images displayed behind the central image, objects to the right of the robot appear as images displayed to the right of the central image, and objects to the left of the robot appear as images displayed to the left of the central image. (per claim 5) / wherein the cameras include a front camera that faces in a forward direction relative to the robot, a rear camera that faces in a rearward direction relative to the robot, a left camera that faces in a leftward direction relative to the robot, and a right camera that faces in a rightward direction relative to the robot. (per claim 14) (Paragraph [0109, 0118, 0136, 0147] and Figure [2A-2B, 4A, 4E, 35B], Jiang discloses receiving imaging data from four cameras on the vehicle with different but partially overlapping fields of view, facing forward, backward, left, and right. As seen in figure 4E, views captured from each camera are shown with the vehicle in the middle, and the images correlate to their respective position on the vehicle)
Regarding claims 9 and 17: Parent claims 1 and 13 are obvious over Jiang. Jiang further discloses further comprising dynamically adjusting the central image of the robot in the top-down view to reflect a condition of the robot. (per claim 9) / wherein the control circuitry is further constructed and arranged to dynamically adjust the central image of the robot in the top-down view to reflect a condition of the robot. (per claim 17) (Paragraph [0113] and Figure [13, 19, 23], Jiang discloses adjusting the viewport based on a driving scenario (direction of travel, for instance), based on a detected salient event, etc.)
Regarding claim 15: Parent claim 13 is obvious over Jiang. Jiang further discloses wherein the control circuitry includes an electronic control unit (ECU) of the robot, the ECU constructed and arranged to combine the images from the cameras. (Paragraph [0381], Jiang discloses an embodiment of the computing device as an ECU)
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Jiang in view of Nakagawa et al. (US 2022/0180488 A1; Published 09 June 2022, hereinafter Nakagawa).
Regarding claim 6: Parent claim 1 is obvious over Jiang. Jiang does not explicitly disclose a command to transmit an individual view.
However, Nakagawa, in a similar field of endeavor of vehicle camera systems, teaches further comprising receiving a command from the control device and transmitting an individual view from one of the cameras in place of the top-down view responsive to the command from the control device. (Paragraph [0077], Nakagawa teaches that the vehicle has a plurality of cameras with a plurality of views, and a user can toggle a single standalone/discrete view (the front view, side view, etc.))
Jiang and Nakagawa are in a similar field of endeavor of vehicle camera systems. It would have been obvious to one having ordinary skill in the art at the time of effective filing, with a reasonable expectation of success, to have modified the disclosure of Jiang to include a toggle for a user to select a single view as taught by Nakagawa in the interest of saving computing resources (Nakagawa, Paragraph [0077]). This constitutes a combination of known elements according to known methods to produce predictable results.
Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Jiang in view of Tsuda (US 2023/0303000 A1; Published 28 September 2023, hereinafter Tsuda).
Regarding claim 7: Parent claim 1 is obvious over Jiang. Jiang does not explicitly disclose that the cameras are at different heights.
However, Tsuda, in a similar field of endeavor of vehicle camera systems, teaches wherein combining the images from the cameras includes combining a first image from a first camera mounted at a first height with a second image from a second camera mounted at a second height greater than the first height. (Paragraph [0028] and Figure [1], Tsuda teaches utilizing two different cameras (and thus a combination) mounted at different heights to create bird's-eye views)
Jiang and Tsuda are in a similar field of endeavor of vehicle camera systems. It would have been obvious to one having ordinary skill in the art at the time of effective filing, with a reasonable expectation of success, to have modified the disclosure of Jiang to include a disclosure that cameras may be located at different heights, as this is well-known in the art (mounting cameras to windshields, door mirrors, trunks, at some level of height difference) as evidenced by Tsuda. Tsuda offers the beneficial effect of providing further perspective information that lessens the negative impact of blind spots by using two separate views of the same “direction” (Tsuda, Paragraph [0030], for instance).
Regarding claim 8: Parent claim 1 is obvious over Jiang. Jiang does not explicitly disclose that the cameras are at different heights.
However, Tsuda, in a similar field of endeavor of vehicle camera systems, teaches further comprising: receiving an image from an additional camera mounted to the robot and facing a common direction as one of the cameras, the additional camera and the one of the cameras being mounted at different heights; and (Paragraph [0026, 0028, 0030] and Figure [1, 3A-4B], Tsuda teaches two cameras mounted at different heights facing a common direction (rearward))
combining the image from the additional camera with one or more images from the cameras to construct a second top-down view, the top-down view and the second top-down view showing the surroundings of the robot from different perspectives. (Paragraph [0026, 0028, 0030] and Figure [1, 3A-4B], Tsuda teaches two cameras mounted at different heights facing a common direction (rearward) to construct a plurality of bird’s eye (top down) views)
Jiang and Tsuda are in a similar field of endeavor of vehicle camera systems. It would have been obvious to one having ordinary skill in the art at the time of effective filing, with a reasonable expectation of success, to have modified the disclosure of Jiang to include a disclosure that cameras may be located at different heights, as this is well-known in the art (mounting cameras to windshields, door mirrors, trunks, at some level of height difference) as evidenced by Tsuda. Tsuda offers the beneficial effect of providing further perspective information that lessens the negative impact of blind spots by using two separate views of the same “direction” (Tsuda, Paragraph [0030], for instance).
Claims 10-11 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Jiang in view of Grimm et al. (US 2019/0246617 A1; Published 15 August 2019, hereinafter Grimm).
Regarding claims 10 and 18: Parent claims 9 and 17 are obvious over Jiang. Jiang does not explicitly disclose showing the direction of aim of a nozzle of the vehicle; however, this is merely an obvious design choice.
Grimm, in a similar field of endeavor of graphical display systems, teaches wherein dynamically adjusting the central image includes, responsive to a nozzle of the robot being repositioned to aim in a direction, updating the central image to show the direction in which the nozzle is aimed. (per claim 10) / further comprising a nozzle constructed and arranged to aim in multiple directions, wherein the control circuitry constructed and arranged to dynamically adjust the central image is further constructed and arranged to update the central image to indicate a direction in which the nozzle is aimed. (per claim 18) (Paragraph [0057] and Figure [5, 9-10], Grimm teaches adjusting the displayed image of a spraying apparatus with an overlay corresponding to nozzle aim direction)
Jiang and Grimm are in a similar field of endeavor of graphical display systems. It would have been obvious to one having ordinary skill in the art at the time of effective filing, with a reasonable expectation of success, to have modified the disclosure of Jiang to include further graphical elements indicating spray pattern/position (as taught by Grimm) as this is merely a matter of obvious design choice. A person having ordinary skill in the art at the time of effective filing would have easily been able to modify the disclosure of Jiang to display further additional information that is pertinent to the operator (such as nozzle information) as a matter of design choice.
Regarding claim 11: Parent claim 10 is obvious over Jiang in view of Grimm. Jiang does not explicitly disclose showing the direction of aim of a nozzle of the vehicle; however, this is merely an obvious design choice.
Grimm, in a similar field of endeavor of graphical display systems, teaches while the nozzle is being repositioned to aim in the direction, updating the top-down view to include a directional indicator that identifies a movement direction of the nozzle. (Paragraph [0057] and Figure [5, 9-10], Grimm teaches adjusting the displayed image of a spraying apparatus with an overlay corresponding to nozzle aim direction)
Jiang and Grimm are in a similar field of endeavor of graphical display systems. It would have been obvious to one having ordinary skill in the art at the time of effective filing, with a reasonable expectation of success, to have modified the disclosure of Jiang to include further graphical elements indicating spray pattern/position (as taught by Grimm) as this is merely a matter of obvious design choice. A person having ordinary skill in the art at the time of effective filing would have easily been able to modify the disclosure of Jiang to display further additional information that is pertinent to the operator (such as nozzle information) as a matter of design choice.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Jiang in view of Lee et al. (US 2017/0148136 A1; Published 25 May 2017, hereinafter Lee).
Regarding claim 12: Parent claim 1 is obvious over Jiang. Jiang does not explicitly disclose that the central views depicted in Figure [4E] (for instance) are static; however, this is a matter of obvious design choice.
Lee, in a similar field of endeavor of vehicle camera systems, teaches further comprising, responsive to the robot being driven, updating the top-down view such that the central image in the top-down view remains stationary as views of the surroundings of the robot change. (Paragraph [0083-0094] and Figure [8, 10], Lee teaches an aerial view of a vehicle wherein the center (the graphical element representing the vehicle) is static despite the changing environment. As the vehicle backs into the space, the vehicle remains centered)
Jiang and Lee are in a similar field of endeavor of vehicle camera systems. It would have been obvious to one having ordinary skill in the art at the time of effective filing, with a reasonable expectation of success, to have modified the disclosure of Jiang to explicitly provide guidance that the rendering of the vehicle in Figure [4E] (for instance) may be static, as this is merely a matter of design choice and one of a very limited number of possibilities. Lee is relied upon merely as a reference to show that the design choice noted has also been established in an analogous art prior to effective filing.
References
Further references that discuss relevant prior art, but were not relied upon in this Office action, are provided below:
1. US 2012/0170812 A1, "DRIVING SUPPORT DISPLAY DEVICE" (Kamiyama; filed 15 March 2012; published 05 July 2012). Discusses a vehicle with a plurality of mounted cameras whose images are combined to form a virtual bird's-eye view.
2. US 2016/0300113 A1, "VEHICLE 360° SURROUND VIEW SYSTEM HAVING CORNER PLACED CAMERAS, AND SYSTEM AND METHOD FOR CALIBRATION THEREOF" (Molin et al.; filed 10 April 2015; published 13 October 2016). Discusses a 360-degree surround (bird's-eye) view of a vehicle using a plurality of cameras and stitched images; further discusses calibration details.
3. US 2024/0348749 A1, "CAMERA SYSTEM, AND METHOD FOR GENERATING A VIEW USING A CAMERA SYSTEM" (Gloger et al.; filed 24 June 2022; published 17 October 2024). Discusses establishing a surround view of a vehicle by stitching overlapping views of cameras.
4. US 8,973,671 B2, "Smart Compact Indoor Firefighting Robot for Extinguishing a Fire at an Early Stage" (Alsaif et al.; filed 04 November 2011; published 10 March 2015). Discusses a firefighting robot with a camera for navigation.
5. US 2014/0036063 A1, "AROUND VIEW MONITOR SYSTEM AND MONITORING METHOD" (Kim et al.; filed 15 April 2013; published 06 February 2014). Discusses an overhead view of a vehicle from stitched camera images, wherein the view may change depending upon a driving state of the vehicle.
6. EP 3037135 B1, "A firefighting vehicle comprising an extractable and turntable ladder and extendable lateral supports, and a visual assistance system for positioning said vehicle, and method for positioning such a firefighting vehicle" (Huehn; filed 23 December 2014; published 19 May 2021). Discusses a firefighting vehicle system that obtains multiple camera views and stitches them into a bird's-eye view.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN J BROSH whose telephone number is (571)270-0105. The examiner can normally be reached M-F 0730-1700.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, THOMAS WORDEN can be reached at (571)272-4876. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/B.J.B./Examiner, Art Unit 3658
/THOMAS E WORDEN/Supervisory Patent Examiner, Art Unit 3658