DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The Amendment filed 09/22/2025 has been entered. Claims 1-2, 4-9, and 11-14 are pending in this application.
Claims 1, 2, 4-5, 7-9, 11-12, and 14 have been amended. Claims 3 and 10 are cancelled.
Response to Arguments
Applicant's arguments filed 09/22/2025 have been fully considered but they are not persuasive.
In response to applicant’s argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007).
Main Argument (pages 7-8) under Claim Rejections under 35 U.S.C. § 103. The applicant asserts that:
Watanabe is deficient with respect to the foregoing features. For example, Watanabe discloses a process for determining an error of the first camera by a processor transmitting information indicating the error to the second sensor data generating device via the output interface. In contrast, when there is no response signal transmitted from the communication I/F, in the Applicant's disclosure, it is determined that the first image processing unit has failed. Therefore, determining an error of the first camera by transmitting error information, as in Watanabe, is not properly equated to determining the error of the first image processing unit as recited in independent claim 1.
In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., the communication I/F 16a constituting the above-described first image processing unit 3a, and detects an abnormality of the counterpart. That is, when no response signal is transmitted from the communication I/F 16a constituting the first image processing unit 3a even though a monitoring signal is transmitted from the communication I/F 16b constituting the first image processing unit 3a, it is determined that the first image processing unit 3a has failed) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 4, 8-9, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Kazuhiko Kobayashi (US 20190369635 A1) (hereinafter Kobayashi) in view of Gaby Hayon (US 20170010109 A1) (hereinafter Hayon), in view of Anuj Kapuria (US 20180257560 A1) (hereinafter Kapuria), further in view of Sally-Anne Palmer (US 20100026811 A1) (hereinafter Palmer), and further in view of Shigeyuki Watanabe (US 20190355133 A1) (hereinafter Watanabe):
Regarding Claim 1, Kobayashi teaches an image processing apparatus that recognizes a recognition target based on image data obtained by imaging an outside world using a first imaging device and a second imaging device (“Images, such as two-dimensional frame images, captured by the respective cameras 32 w, 32 n, and 32 t are used for recognizing lane markers on a scheduled road on which the vehicle V is scheduled to travel, and for recognizing objects existing in the surrounding region around the vehicle V.” [0082]) installed to be spaced apart from the first imaging device in a vertical direction from an interior of a vehicle via a window glass (Fig. 2: the cameras (32w, 32n, 32t) are spaced vertically and are attached to the windshield of the vehicle),
Kobayashi does not explicitly teach the following limitations; however, in an analogous art, Hayon teaches a first image processing unit that recognizes a first recognition target based on image data of the first imaging device (“The second processing device may receive images from main camera and perform vision processing to detect other vehicles, pedestrians, lane marks, traffic signs, traffic lights, and other road objects.” [0323]); and
a second image processing unit that recognizes a second recognition target … (“a first processing device may receive images from both the main camera and the narrow field of view camera, and perform vision processing of the narrow FOV camera to, for example, detect other vehicles, pedestrians, lane marks, traffic signs, traffic lights, and other road objects.” [0322]; Examiner note: both cameras have different FOV with different objects),
… the second image processing unit recognizes the first recognition target (“in a three camera system, a first processing device may receive images from both the main camera and the narrow field of view camera, and perform vision processing of the narrow FOV camera to, for example, detect other vehicles, pedestrians, lane marks, traffic signs, traffic lights, and other road objects.” [0322]; the processor receives images from both cameras and detects different objects).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the vertical cameras of the vehicle disclosed by Kobayashi to add the object detection of Hayon in order to improve vehicle safety (Hayon [0345]).
Hayon does not explicitly teach the following limitations; however, in an analogous art, Kapuria teaches … different from the first recognition target based on image data of the first imaging device and the second imaging device (“The first camera 102, is adapted to capture activity or objects like pedestrian 206 on the road 204 that is in close range vicinity. The first camera 102 does this in real-time. However, it is to be appreciated that the first camera 102 works in low speed ranges like 0-120 mph. The second camera 104, is adapted to capture objects or activity in long-range vicinity of the vehicle 202 like a speed sign 208 as shown in the environment 200” [0029]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the vertical cameras of the vehicle disclosed by Kobayashi, combined with the object detection of Hayon, to add the different recognition targets of Kapuria in order to improve the warning signals to the driver (Kapuria [0001]).
Kapuria does not explicitly teach the following limitations; however, in an analogous art, Palmer teaches when a predetermined condition is satisfied, the image processing apparatus shifts to a degeneration mode (“A camera 102 streams video data 103 over a network 110 (such as a TCP/IP network or other type of network), and the predetermined conditions are met in the event that camera server 104 is unresponsive to module 107 over that network. It will be appreciated that the operational characteristics discussed above apply equally to networked and non-networked environments” [0060]; “Process 503 includes assessing input indicative of health changes of camera servers and, where appropriate, configuring the secondary camera server to take over from a failed primary camera server. In taking over, the cameras initially assigned to the failed primary camera server are automatically reassigned to the secondary server.” [0111]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the vertical cameras of the vehicle disclosed by Kobayashi, combined with the object detection of Hayon and the different recognition targets of Kapuria, to add the fail-safe of Palmer in order to improve video data management (Palmer [0007]).
Palmer does not explicitly teach the following limitations; however, in an analogous art, Watanabe teaches the first image processing unit or the second image processing unit determines a failure of the first imaging device (“when determining an error of the first camera 101, the processor 12 of the first sensor data generating device 1A transmits the information indicating the error to the second sensor data generating device 1B via the output interface 13. The information is inputted to the input interface 11 of the second sensor data generating device 1B. Similarly, when determining the error of the second camera 102, the processor 12 of the second sensor data generating device 1B transmits the information indicating the error to the first sensor data generating device 1A via the output interface 13.” [0084]),
the second image processing unit determines a failure of the first image processing unit (“when determining an error of the first camera 101, the processor 12 of the first sensor data generating device 1A transmits the information indicating the error to the second sensor data generating device 1B via the output interface 13. The information is inputted to the input interface 11 of the second sensor data generating device 1B. Similarly, when determining the error of the second camera 102, the processor 12 of the second sensor data generating device 1B transmits the information indicating the error to the first sensor data generating device 1A via the output interface 13.” [0084]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the vertical cameras of the vehicle disclosed by Kobayashi, combined with the object detection of Hayon, the different recognition targets of Kapuria, and the fail-safe of Palmer, to add the camera failure detection of Watanabe in order to increase sensor efficiency (Watanabe [0005]).
Regarding Claim 2, Kobayashi in view of Hayon, Kapuria, Palmer, and Watanabe teach the image processing apparatus according to claim 1. Palmer further teaches wherein the predetermined condition is the failure of the first imaging device or the failure of the first image processing unit (“the failure of a camera server, cameras assigned to that server are automatically reassigned to a backup camera server.” [0051]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the vertical cameras of the vehicle disclosed by Kobayashi, combined with the object detection of Hayon and the different recognition targets of Kapuria, to add the fail-safe of Palmer in order to improve video data management (Palmer [0007]).
Regarding Claim 4, Kobayashi in view of Hayon, Kapuria, Palmer, and Watanabe teach the image processing apparatus according to claim 2. Hayon further teaches the second image processing unit recognizes the first recognition target based on image data of the second imaging device (“a first processing device may receive images from both the main camera and the narrow field of view camera, and perform vision processing of the narrow FOV camera to, for example, detect other vehicles, pedestrians, lane marks, traffic signs, traffic lights, and other road objects.” [0322]; Note: the processing device gets images from both cameras and detects the objects from the images received).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the vertical cameras of the vehicle disclosed by Kobayashi to add the object detection of Hayon in order to improve vehicle safety (Hayon [0345]).
Hayon does not explicitly teach the following limitations; however, in an analogous art, Palmer teaches wherein when the predetermined condition is satisfied, the image processing apparatus shifts to the degeneration mode (“A camera 102 streams video data 103 over a network 110 (such as a TCP/IP network or other type of network), and the predetermined conditions are met in the event that camera server 104 is unresponsive to module 107 over that network. It will be appreciated that the operational characteristics discussed above apply equally to networked and non-networked environments” [0060]; “Process 503 includes assessing input indicative of health changes of camera servers and, where appropriate, configuring the secondary camera server to take over from a failed primary camera server. In taking over, the cameras initially assigned to the failed primary camera server are automatically reassigned to the secondary server.” [0111]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the vertical cameras of the vehicle disclosed by Kobayashi, combined with the object detection of Hayon and the different recognition targets of Kapuria, to add the fail-safe of Palmer in order to improve video data management (Palmer [0007]).
Regarding Claim 8, Kobayashi teaches an image processing system that images an outside world from an interior of a vehicle via a window glass to recognize a recognition target (“Images, such as two-dimensional frame images, captured by the respective cameras 32 w, 32 n, and 32 t are used for recognizing lane markers on a scheduled road on which the vehicle V is scheduled to travel, and for recognizing objects existing in the surrounding region around the vehicle V.” [0082]), the image processing system comprising:
a first imaging device (Fig. 2: the cameras (32w, 32n, 32t) are spaced vertically and are attached to the windshield of the vehicle);
a second imaging device installed to be spaced apart from the first imaging device in a vertical direction (Fig. 2: the cameras (32w, 32n, 32t) are spaced vertically and are attached to the windshield of the vehicle);
Kobayashi does not explicitly teach the following limitations; however, in an analogous art, Hayon teaches wherein the first image processing unit recognizes a first recognition target based on image data of the first imaging device (“The second processing device may receive images from main camera and perform vision processing to detect other vehicles, pedestrians, lane marks, traffic signs, traffic lights, and other road objects.” [0323]),
the second image processing unit recognizes a second recognition target … (“a first processing device may receive images from both the main camera and the narrow field of view camera, and perform vision processing of the narrow FOV camera to, for example, detect other vehicles, pedestrians, lane marks, traffic signs, traffic lights, and other road objects.” [0322]; Examiner note: both cameras have different FOV with different objects),
…, the second image processing unit recognizes the first recognition target (“a first processing device may receive images from both the main camera and the narrow field of view camera, and perform vision processing of the narrow FOV camera to, for example, detect other vehicles, pedestrians, lane marks, traffic signs, traffic lights, and other road objects.” [0322]; Examiner note: both cameras have different FOV with different objects and different objects are detected from different images).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the vertical cameras of the vehicle disclosed by Kobayashi to add the object detection of Hayon in order to improve vehicle safety (Hayon [0345]).
Hayon does not explicitly teach the following limitations; however, in an analogous art, Kapuria teaches … different from the first recognition target based on image data of the first imaging device and the second imaging device (“The first camera 102, is adapted to capture activity or objects like pedestrian 206 on the road 204 that is in close range vicinity. The first camera 102 does this in real-time. However, it is to be appreciated that the first camera 102 works in low speed ranges like 0-120 mph. The second camera 104, is adapted to capture objects or activity in long-range vicinity of the vehicle 202 like a speed sign 208 as shown in the environment 200” [0029]), …
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the vertical cameras of the vehicle disclosed by Kobayashi, combined with the object detection of Hayon, to add the different recognition targets of Kapuria in order to improve the warning signals to the driver (Kapuria [0001]).
Kapuria does not explicitly teach the following limitations; however, in an analogous art, Palmer teaches when a predetermined condition is satisfied, the image processing system shifts to a degeneration mode (“A camera 102 streams video data 103 over a network 110 (such as a TCP/IP network or other type of network), and the predetermined conditions are met in the event that camera server 104 is unresponsive to module 107 over that network. It will be appreciated that the operational characteristics discussed above apply equally to networked and non-networked environments” [0060]; “Process 503 includes assessing input indicative of health changes of camera servers and, where appropriate, configuring the secondary camera server to take over from a failed primary camera server. In taking over, the cameras initially assigned to the failed primary camera server are automatically reassigned to the secondary server.” [0111]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the vertical cameras of the vehicle disclosed by Kobayashi, combined with the object detection of Hayon and the different recognition targets of Kapuria, to add the fail-safe of Palmer in order to improve video data management (Palmer [0007]).
Palmer does not explicitly teach the following limitations; however, in an analogous art, Watanabe teaches a first image processing unit electrically connected to at least the first imaging device (Fig. 7; Note: camera 101 is connected to the processor 12 in 1A); and
a second image processing unit electrically connected to the first imaging device and the second imaging device (Fig. 7; Note: camera 101 and camera 102 are connected to the processors 12 in the first sensor data generating device 1A and the second sensor data generating device 1B);
the first image processing unit or the second image processing unit determines a failure of the first imaging device (“when determining an error of the first camera 101, the processor 12 of the first sensor data generating device 1A transmits the information indicating the error to the second sensor data generating device 1B via the output interface 13. The information is inputted to the input interface 11 of the second sensor data generating device 1B. Similarly, when determining the error of the second camera 102, the processor 12 of the second sensor data generating device 1B transmits the information indicating the error to the first sensor data generating device 1A via the output interface 13.” [0084]),
the second image processing unit determines a failure of the first image processing unit (“when determining an error of the first camera 101, the processor 12 of the first sensor data generating device 1A transmits the information indicating the error to the second sensor data generating device 1B via the output interface 13. The information is inputted to the input interface 11 of the second sensor data generating device 1B. Similarly, when determining the error of the second camera 102, the processor 12 of the second sensor data generating device 1B transmits the information indicating the error to the first sensor data generating device 1A via the output interface 13.” [0084]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the vertical cameras of the vehicle disclosed by Kobayashi, combined with the object detection of Hayon, the different recognition targets of Kapuria, and the fail-safe of Palmer, to add the camera failure detection of Watanabe in order to increase sensor efficiency (Watanabe [0005]).
Regarding Claim 9, Kobayashi in view of Hayon, Kapuria, Palmer, and Watanabe teach the image processing system according to claim 8. Palmer further teaches wherein the predetermined condition is the failure of the first imaging device or the failure of the first image processing unit (“the failure of a camera server, cameras assigned to that server are automatically reassigned to a backup camera server.” [0051]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the vertical cameras of the vehicle disclosed by Kobayashi, combined with the object detection of Hayon and the different recognition targets of Kapuria, to add the fail-safe of Palmer in order to improve video data management (Palmer [0007]).
Regarding Claim 11, Kobayashi in view of Hayon, Kapuria, Palmer, and Watanabe teach the image processing system according to claim 9. Hayon further teaches the second image processing unit recognizes the first recognition target based on image data of the second imaging device (“a first processing device may receive images from both the main camera and the narrow field of view camera, and perform vision processing of the narrow FOV camera to, for example, detect other vehicles, pedestrians, lane marks, traffic signs, traffic lights, and other road objects.” [0322]; Note: the processing device gets images from both cameras and detects the objects from the images received).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the vertical cameras of the vehicle disclosed by Kobayashi to add the object detection of Hayon in order to improve vehicle safety (Hayon [0345]).
Hayon does not explicitly teach the following limitations; however, in an analogous art, Palmer teaches wherein when the predetermined condition is satisfied, the image processing system shifts to the degeneration mode (“A camera 102 streams video data 103 over a network 110 (such as a TCP/IP network or other type of network), and the predetermined conditions are met in the event that camera server 104 is unresponsive to module 107 over that network. It will be appreciated that the operational characteristics discussed above apply equally to networked and non-networked environments” [0060]; “Process 503 includes assessing input indicative of health changes of camera servers and, where appropriate, configuring the secondary camera server to take over from a failed primary camera server. In taking over, the cameras initially assigned to the failed primary camera server are automatically reassigned to the secondary server.” [0111]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the vertical cameras of the vehicle disclosed by Kobayashi, combined with the object detection of Hayon and the different recognition targets of Kapuria, to add the fail-safe of Palmer in order to improve video data management (Palmer [0007]).
Claims 5-6 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Kazuhiko Kobayashi (US 20190369635 A1) (hereinafter Kobayashi) in view of Gaby Hayon (US 20170010109 A1) (hereinafter Hayon), in view of Anuj Kapuria (US 20180257560 A1) (hereinafter Kapuria), further in view of Sally-Anne Palmer (US 20100026811 A1) (hereinafter Palmer), in view of Shigeyuki Watanabe (US 20190355133 A1) (hereinafter Watanabe), and further in view of Koichi Sassa (US 20210302571 A1) (hereinafter Sassa):
Regarding Claim 5, Kobayashi in view of Hayon, Kapuria, Palmer, and Watanabe teach the image processing apparatus according to claim 2. Hayon further teaches wherein the second recognition target includes at least an object (“a first processing device may receive images from both the main camera and the narrow field of view camera, and perform vision processing of the narrow FOV camera to, for example, detect other vehicles, pedestrians, lane marks, traffic signs, traffic lights, and other road objects.” [0322]; Note: the processing device gets images from both cameras and detects the objects from the images received).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the vertical cameras of the vehicle disclosed by Kobayashi to add the object detection of Hayon in order to improve vehicle safety (Hayon [0345]).
Hayon does not explicitly teach the following limitations; however, in an analogous art, Sassa teaches …existing at a predetermined height or higher above a road surface (“the identification processing unit 740 identifies the object to be detected as an object such as the curbstone C having a height lower than the predetermined height.” [0113]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the vertical cameras of the vehicle disclosed by Kobayashi, combined with the object detection of Hayon, the different recognition targets of Kapuria, the fail-safe of Palmer, and the camera failure detection of Watanabe, to add the threshold height detection of Sassa in order to improve the accuracy of the determination results (Sassa [0121]).
Regarding Claim 6, Kobayashi in view of Hayon, Kapuria, Palmer, Watanabe, and Sassa teach the image processing apparatus according to claim 5. Kapuria further teaches wherein the first recognition target includes a lane, a vehicle, a two-wheeled vehicle, and a pedestrian (“The first camera 102, is adapted to capture activity or objects like pedestrian 206 on the road 204 that is in close range vicinity.” [0029]) and
the second recognition target includes any of a display state of a traffic light, a road sign, a free space that is an area where there is no obstacle when an own vehicle moves, and a 3D sensing distance (“The second camera 104, is adapted to capture objects or activity in long-range vicinity of the vehicle 202 like a speed sign 208 as shown in the environment 200.” [0029]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the vertical cameras of the vehicle disclosed by Kobayashi, combined with the object detection of Hayon, to add the different recognition targets of Kapuria in order to improve the warning signals to the driver (Kapuria [0001]).
Regarding Claim 12, Kobayashi in view of Hayon, Kapuria, Palmer, and Watanabe teach the image processing system according to claim 9. Hayon further teaches wherein the second recognition target includes at least an object (“a first processing device may receive images from both the main camera and the narrow field of view camera, and perform vision processing of the narrow FOV camera to, for example, detect other vehicles, pedestrians, lane marks, traffic signs, traffic lights, and other road objects.” [0322]; Note: the processing device gets images from both cameras and detects the objects from the images received).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the vertical cameras of the vehicle disclosed by Kobayashi to add the object detection of Hayon in order to improve vehicle safety (Hayon [0345]).
Hayon does not explicitly teach the following limitations; however, in an analogous art, Sassa teaches …existing at a predetermined height or higher above a road surface (“the identification processing unit 740 identifies the object to be detected as an object such as the curbstone C having a height lower than the predetermined height.” [0113]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the vertical cameras of the vehicle disclosed by Kobayashi, combined with the object detection of Hayon, the different recognition targets of Kapuria, the fail-safe of Palmer, and the camera failure detection of Watanabe, to add the threshold height detection of Sassa in order to improve the accuracy of the determination results (Sassa [0121]).
Regarding Claim 13, Kobayashi in view of Hayon, Kapuria, Palmer, Watanabe, and Sassa teach the image processing system according to claim 12. Kapuria further teaches wherein the first recognition target includes a lane, a vehicle, a two-wheeled vehicle, and a pedestrian (“The first camera 102, is adapted to capture activity or objects like pedestrian 206 on the road 204 that is in close range vicinity.” [0029]) and
the second recognition target includes any of a display state of a traffic light, a road sign, a free space that is an area where there is no obstacle when an own vehicle moves, and a 3D sensing distance (“The second camera 104, is adapted to capture objects or activity in long-range vicinity of the vehicle 202 like a speed sign 208 as shown in the environment 200.” [0029]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the vertical cameras of the vehicle disclosed by Kobayashi, combined with the object detection of Hayon, to add the different recognition targets of Kapuria in order to improve the warning signals to the driver (Kapuria [0001]).
Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Kazuhiko Kobayashi (US 20190369635 A1) (hereinafter Kobayashi) in view of Gaby Hayon (US 20170010109 A1) (hereinafter Hayon), in view of Anuj Kapuria (US 20180257560 A1) (hereinafter Kapuria), further in view of Sally-Anne Palmer (US 20100026811 A1) (hereinafter Palmer), in view of Shigeyuki Watanabe (US 20190355133 A1) (hereinafter Watanabe), and further in view of Tomomi Hase (US 20200079324 A1) (hereinafter Hase):
Regarding Claim 7, Kobayashi in view of Hayon, Kapuria, Palmer, and Watanabe teach the image processing apparatus according to claim 2. Palmer further teaches wherein when the predetermined condition is satisfied, the image processing apparatus shifts to the degeneration mode (“A camera 102 streams video data 103 over a network 110 (such as a TCP/IP network or other type of network), and the predetermined conditions are met in the event that camera server 104 is unresponsive to module 107 over that network. It will be appreciated that the operational characteristics discussed above apply equally to networked and non-networked environments” [0060]; “Process 503 includes assessing input indicative of health changes of camera servers and, where appropriate, configuring the secondary camera server to take over from a failed primary camera server. In taking over, the cameras initially assigned to the failed primary camera server are automatically reassigned to the secondary server.” [0111]), …
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the vertical cameras of the vehicle disclosed by Kobayashi, as modified by the object detection of Hayon and the different recognition targets of Kapuria, with the fail-safe of Palmer in order to improve video data management (Palmer [0007]).
Palmer does not explicitly teach the following limitations; however, in an analogous art, Hase teaches a wiper on the window glass is stopped at a position not to obstruct an imaging field of view of the second imaging device (“By controlling the electric power supply section to cause the wiper blade to stop in a region of the windshield outside of the to-be-imaged region based on the wiping speed and position of the wiper blade, the wiper drive device can stop the wiper blade so that the wiper blade does not block the field of view of the camera.” [0157]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the vertical cameras of the vehicle disclosed by Kobayashi, as modified by the object detection of Hayon, the different recognition targets of Kapuria, the fail-safe of Palmer, and the camera failure detection of Watanabe, with the windshield wiper control of Hase in order to clear the obstruction of the field of view of the camera for clearer images (Hase [0025]).
Regarding Claim 14, Kobayashi in view of Hayon, Kapuria, Palmer, and Watanabe teach the image processing system according to claim 9. Palmer further teaches wherein when the predetermined condition is satisfied, the image processing apparatus shifts to the degeneration mode (“A camera 102 streams video data 103 over a network 110 (such as a TCP/IP network or other type of network), and the predetermined conditions are met in the event that camera server 104 is unresponsive to module 107 over that network. It will be appreciated that the operational characteristics discussed above apply equally to networked and non-networked environments” [0060]; “Process 503 includes assessing input indicative health changes of camera servers and, where appropriate, configuring the secondary camera server to take over from a failed primary camera server. In taking over, the cameras initially assigned to the failed primary camera server are automatically reassigned to the secondary server.” [0111]), …
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the vertical cameras of the vehicle disclosed by Kobayashi, as modified by the object detection of Hayon and the different recognition targets of Kapuria, with the fail-safe of Palmer in order to improve video data management (Palmer [0007]).
Palmer does not explicitly teach the following limitations; however, in an analogous art, Hase teaches a wiper on the window glass is stopped at a position not to obstruct an imaging field of view of the second imaging device (“By controlling the electric power supply section to cause the wiper blade to stop in a region of the windshield outside of the to-be-imaged region based on the wiping speed and position of the wiper blade, the wiper drive device can stop the wiper blade so that the wiper blade does not block the field of view of the camera.” [0157]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the vertical cameras of the vehicle disclosed by Kobayashi, as modified by the object detection of Hayon, the different recognition targets of Kapuria, the fail-safe of Palmer, and the camera failure detection of Watanabe, with the windshield wiper control of Hase in order to clear the obstruction of the field of view of the camera for clearer images (Hase [0025]).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAHMOUD KAMAL ABOUZAHRA whose telephone number is (703)756-1694. The examiner can normally be reached M-F 7:00 AM to 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jamie Atala, can be reached at (571) 272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MAHMOUD KAMAL ABOUZAHRA/ Examiner, Art Unit 2486
/JAMIE J ATALA/ Supervisory Patent Examiner, Art Unit 2486