DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 9/11/25 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4, 9-11, 14, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Fursich (US 2019/0080185) in view of Forutanpour (US 2018/0357804).
Regarding claim 1, Fursich discloses a method for a camera monitor system (paragraph [65], fig.15B, Fursich discloses a camera monitoring system), comprising:
obtaining a first image from a first camera and a second image from a second camera (paragraph [65], fig.15B, Fursich discloses cameras 14g and 14h, wherein camera 14g faces rearward from the driver (left) side to capture the rearward field of view on that side as well as some side view, and camera 14h faces rearward from the passenger (right) side to capture the rearward field of view on that side as well as some side view), the first image and second image depicting a side of a commercial vehicle (paragraph [81], Fursich discloses that the camera system for monitoring a vehicle can also be implemented with a trailer connected to a vehicle, which is similar to Applicant's paragraph [39] on page 5 of the specification, where Applicant discloses connecting a trailer to a commercial vehicle) and its surrounding environment (paragraph [65], fig.15B, Fursich discloses cameras 14g and 14h, as described above; paragraph [54], Fursich discloses that wide or wider field of view cameras can be implemented for a side view camera to capture a wide field of view of the surrounding environment, including the side view and the front and rear views, as illustrated in fig.13; and paragraph [58], fig.13, Fursich discloses side-mounted 180 degree (wide angle) field of view cameras angled downward and sideward, as well as rearward-facing cameras like cameras 14g and 14h), the first camera and the second camera having different, overlapping fields of view (paragraph [65], fig.15B, Fursich discloses cameras 14g and 14h, wherein camera 14g captures the rearward field of view from the driver (left) side as well as some side view, camera 14h captures the rearward field of view from the passenger (right) side as well as some side view, and camera 14g is different from camera 14h), and having different respective optical axes that intersect a ground plane at different respective optical angles (paragraph [54], Fursich discloses that wide or wider field of view cameras can be implemented for a side view camera to capture a wide field of view of the surrounding environment, as illustrated in fig.13; and paragraph [58], fig.13, Fursich discloses side-mounted 180 degree (wide angle) field of view cameras angled downward and sideward, as well as rearward-facing cameras like cameras 14g and 14h; thus the cameras can have different respective optical axes that intersect a ground plane at different angles, since the cameras can be angled downward);
performing a perspective transformation on at least one of the first image and second image to obtain an updated image set (paragraph [61], Fursich discloses that a perspective transformation process takes place in that a new combined image is formed by dewarping and stitching, as illustrated in fig.14, to combine image portions obtained from plural cameras and thereby obtain an updated image set); and
horizontally stitching the images of the updated image set together to form a combined image (paragraph [54], Fursich discloses that wide or wider field of view cameras can be implemented for a side view camera to capture a wide field of view of the surrounding environment, including the side view and the front and rear views, as illustrated in fig.13, and that the processing and stitching of images is performed as illustrated in fig.14; and paragraph [61], Fursich discloses that stitching of images is performed in fig.14, where a stitching line permits the stitching of images in a horizontal manner to form a combined image).
Fursich does not disclose performing a perspective transformation on at least one of the first image and second image to obtain an updated image set, such that at least one of the first image and second image are updated in the updated image set, or vertically stitching the images of the updated image set together to form a combined image. However, Forutanpour teaches performing a perspective transformation on at least one of the first image and second image to obtain an updated image set (paragraph [132], Forutanpour discloses that GPU 18 blends the image content on the bottom of a first rectangular image with the image content on the top of a second rectangular image to generate a stitched rectangular image, which constitutes a perspective transformation process; and paragraph [220], Forutanpour discloses that the image data obtained with the processes initiated by the CPU and GPU can be updated and refreshed by utilizing a circular addressing scheme in the buffer (memory) to update the image data stored in the buffer), such that at least one of the first image and second image are updated in the updated image set (paragraphs [132] and [220], as cited above); and vertically stitching the images of the updated image set together to form a combined image (paragraph [132], Forutanpour discloses that GPU 18 blends the bottom of the first rectangular image with the top of the second rectangular image to form a combined rectangular image in a vertical manner; and paragraph [220], as cited above). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fursich and Forutanpour as a whole to provide a better perspective view of the video image content obtained by the cameras, permitting the driver to drive and park more carefully.
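Examiner's note, for illustration only and not relied upon for the rejection: the vertical stitching attributed to Forutanpour above (blending the image content on the bottom of a first image with the image content on the top of a second image) can be sketched as follows. All function names, array sizes, and the linear-blend choice are hypothetical and are not drawn from either reference.

```python
import numpy as np

def vertical_stitch(top, bottom, overlap=8):
    """Vertically stitch two equal-width images, linearly blending the
    bottom rows of the top image with the top rows of the bottom image."""
    assert top.shape[1] == bottom.shape[1]
    alpha = np.linspace(1.0, 0.0, overlap)[:, None]  # per-row blend weight
    blended = alpha * top[-overlap:] + (1 - alpha) * bottom[:overlap]
    return np.vstack([top[:-overlap], blended, bottom[overlap:]])

# Two synthetic grayscale "camera" images of equal width
first = np.full((64, 96), 200.0)
second = np.full((64, 96), 50.0)
combined = vertical_stitch(first, second)
print(combined.shape)  # (120, 96): 64 + 64 rows minus the 8 overlapping rows
```

The blend weights run from 1.0 (pure top image) to 0.0 (pure bottom image) across the overlap band, so the seam fades rather than showing a hard boundary.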
Regarding claim 4, Fursich does not disclose wherein, in the combined image, the first image is depicted above the second image. However, Forutanpour teaches wherein, in the combined image, the first image is depicted above the second image (paragraph [132], Forutanpour discloses that GPU 18 blends the image content on the bottom of a first rectangular image with the image content on the top of a second rectangular image to generate a stitched rectangular image at the boundary between the top and bottom images, thus joining the first (upper) rectangular image with the second (lower) rectangular image). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fursich and Forutanpour as a whole to provide a better perspective view of the video image content obtained by the cameras, permitting the driver to drive and park more carefully.
Regarding claim 9, Fursich does not disclose wherein said vertically stitching the images of the updated image set together to form a combined image comprises: mapping a plurality of first points of the first image in the updated image set to a plurality of second points in the second image in the updated image set; aligning the first image in the updated image set and the second image in the updated image set based on the mapping; and blending the first image in the updated image set and the second image in the updated image set to form the combined image.
However, Forutanpour teaches wherein said vertically stitching the images of the updated image set together to form a combined image (paragraph [132], Forutanpour discloses that GPU 18 blends the image content on the bottom of a first rectangular image with the image content on the top of a second rectangular image to generate a stitched rectangular image, forming a combined rectangular image in a vertical manner; and paragraph [220], Forutanpour discloses that the image data obtained with the processes initiated by the CPU and GPU can be updated and refreshed by utilizing a circular addressing scheme in the buffer (memory)) comprises: mapping a plurality of first points of the first image in the updated image set to a plurality of second points in the second image in the updated image set (paragraph [66], Forutanpour discloses mapping points from the first image to points from the second image so as to overlay the image content of both images and properly align the first and second images together); aligning the first image in the updated image set and the second image in the updated image set based on the mapping (paragraph [66], as cited above); and blending the first image in the updated image set and the second image in the updated image set to form the combined image (paragraph [69], Forutanpour discloses that the GPU can blend along the borders of the two rectangular images to stitch the two images together; and paragraph [132], as cited above). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fursich and Forutanpour as a whole to provide a better perspective view of the video image content obtained by the cameras, permitting the driver to drive and park more carefully.
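Examiner's note, for illustration only and not relied upon for the rejection: the map-then-align step recited in claim 9 (mapping first-image points to second-image points and aligning based on the mapping) can be sketched as a least-squares estimate of a vertical offset from the point pairs; for pure translation, least squares reduces to the mean row difference. The function name and sample points are hypothetical.

```python
import numpy as np

def align_offset(pts_a, pts_b):
    """Estimate the vertical shift that best aligns mapped point pairs
    (columns are [x, y]); least squares for a pure translation is the
    mean difference of the y coordinates."""
    return float(np.mean(pts_b[:, 1] - pts_a[:, 1]))

# Hypothetical mapped point pairs: the second image sees the same
# features 40 rows lower than the first image does.
a = np.array([[10, 50], [20, 52], [30, 54]], dtype=float)
b = a + [0, 40]
print(align_offset(a, b))  # 40.0
```

In a full system the estimated offset would then position the two images before the blending step described above.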
Regarding claim 10, Fursich discloses the method further comprising: displaying the combined image on an electronic display in the commercial vehicle in a first mode (paragraph [75], Fursich discloses that the combined image can be displayed; paragraph [61], Fursich discloses that stitching of images is performed in fig.14, where a stitching line permits the stitching of images in a horizontal manner to form a combined image; and paragraph [81], Fursich discloses that the camera system for monitoring a vehicle can also be implemented with a trailer connected to a vehicle, which is similar to Applicant's paragraph [39] on page 5 of the specification, where Applicant discloses connecting a trailer to a commercial vehicle); displaying at least one of the first image and the second image separately without combination in a second mode (paragraph [75], Fursich discloses the display of image data in a selective manner for viewing an image captured by one camera or some of the cameras; and paragraph [82], Fursich discloses the display of images captured by one or more imaging sensors for viewing by the driver, thus permitting the separate display of images from one camera at a time); and toggling between the first mode and the second mode in response to receiving a toggle command from an occupant of the commercial vehicle (paragraph [75], Fursich discloses the display of image data in a selective manner for viewing an image captured by one camera, some of the cameras, or all of the cameras; paragraph [61], as cited above; and paragraph [82], as cited above).
Regarding claim 11, Fursich discloses a camera monitoring system (paragraph [65], fig.15B, Fursich discloses a camera monitoring system), comprising:
a first camera and a second camera that are each configured to record respective images of a side of a commercial vehicle (paragraph [81], Fursich discloses that the camera system for monitoring a vehicle can also be implemented with a trailer connected to a vehicle, which is similar to Applicant's paragraph [39] on page 5 of the specification, where Applicant discloses connecting a trailer to a commercial vehicle) and its surrounding environment (paragraph [65], fig.15B, Fursich discloses cameras 14g and 14h, wherein camera 14g faces rearward from the driver (left) side to capture the rearward field of view on that side, and camera 14h faces rearward from the passenger (right) side to capture the rearward field of view on that side; paragraph [54], Fursich discloses that wide or wider field of view cameras can be implemented for a side view camera to capture a wide field of view of the surrounding environment, including the side view and the front and rear views, as illustrated in fig.13; and paragraph [58], fig.13, Fursich discloses side-mounted 180 degree (wide angle) field of view cameras angled downward and sideward, as well as rearward-facing cameras like cameras 14g and 14h),
wherein the first camera and the second camera have different, overlapping fields of view (paragraph [65], fig.15B, Fursich discloses cameras 14g and 14h, wherein camera 14g faces rearward from the driver (left) side to capture the rearward field of view on that side as well as some side view, camera 14h faces rearward from the passenger (right) side to capture the rearward field of view on that side as well as some side view, and camera 14g is different from camera 14h), and have different respective optical axes that intersect a ground plane at different respective optical angles (paragraph [54], Fursich discloses that wide or wider field of view cameras can be implemented for a side view camera to capture a wide field of view of the surrounding environment, as illustrated in fig.13; and paragraph [58], fig.13, Fursich discloses side-mounted 180 degree (wide angle) field of view cameras angled downward and sideward, as well as rearward-facing cameras like cameras 14g and 14h; thus the cameras can have different respective optical axes that intersect a ground plane at different angles, since the cameras can be angled downward); and
processing circuitry operatively connected to memory (paragraph [37], Fursich discloses electronic control unit) and configured to:
obtain a first image from the first camera and a second image from the second camera (paragraph [65], fig.15B, Fursich discloses cameras 14g and 14h, wherein camera 14g faces rearward from the driver (left) side to capture the rearward field of view on that side as well as some side view, and camera 14h faces rearward from the passenger (right) side to capture the rearward field of view on that side as well as some side view);
perform a perspective transformation on at least one of the first image and second image to obtain an updated image set (paragraph [61], Fursich discloses that a perspective transformation process takes place in that a new combined image is formed by dewarping and stitching, as illustrated in fig.14, to combine image portions obtained from plural cameras and thereby obtain an updated image set); and
horizontally stitch the images of the updated image set together to form a combined image (paragraph [54], Fursich discloses that wide or wider field of view cameras can be implemented for a side view camera to capture a wide field of view of the surrounding environment, including the side view and the front and rear views, as illustrated in fig.13, and that the processing and stitching of images is performed as illustrated in fig.14; and paragraph [61], Fursich discloses that stitching of images is performed in fig.14, where a stitching line permits the stitching of images in a horizontal manner to form a combined image).
Fursich does not disclose perform a perspective transformation on at least one of the first image and second image to obtain an updated image set, such that at least one of the first image and second image are updated in the updated image set, or vertically stitch the images of the updated image set together to form a combined image. However, Forutanpour teaches perform a perspective transformation on at least one of the first image and second image to obtain an updated image set (paragraph [132], Forutanpour discloses that GPU 18 blends the image content on the bottom of a first rectangular image with the image content on the top of a second rectangular image to generate a stitched rectangular image, which constitutes a perspective transformation process; and paragraph [220], Forutanpour discloses that the image data obtained with the processes initiated by the CPU and GPU can be updated and refreshed by utilizing a circular addressing scheme in the buffer (memory) to update the image data stored in the buffer), such that at least one of the first image and second image are updated in the updated image set (paragraphs [132] and [220], as cited above); and vertically stitch the images of the updated image set together to form a combined image (paragraph [132], Forutanpour discloses that GPU 18 blends the bottom of the first rectangular image with the top of the second rectangular image to form a combined rectangular image in a vertical manner; and paragraph [220], as cited above). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fursich and Forutanpour as a whole to provide a better perspective view of the video image content obtained by the cameras, permitting the driver to drive and park more carefully.
Regarding claim 14, Fursich does not disclose wherein, in the combined image, the first image is depicted above the second image. However, Forutanpour teaches wherein, in the combined image, the first image is depicted above the second image (paragraph [132], Forutanpour discloses that GPU 18 blends the image content on the bottom of a first rectangular image with the image content on the top of a second rectangular image to generate a stitched rectangular image at the boundary between the top and bottom images, thus joining the first (upper) rectangular image with the second (lower) rectangular image). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fursich and Forutanpour as a whole to provide a better perspective view of the video image content obtained by the cameras, permitting the driver to drive and park more carefully.
Regarding claim 19, Fursich does not disclose wherein to vertically stitch the images of the updated image set, the processing circuitry is configured to: map a plurality of first points of the first image in the updated image set to a plurality of second points in the second image in the updated image set; align the first image in the updated image set and the second image in the updated image set based on the mapping; and blend the first image in the updated image set and the second image in the updated image set to form the combined image.
However, Forutanpour teaches wherein to vertically stitch the images of the updated image set (paragraph [132], Forutanpour discloses that GPU 18 blends the image content on the bottom of a first rectangular image with the image content on the top of a second rectangular image to generate a stitched rectangular image, forming a combined rectangular image in a vertical manner; and paragraph [220], Forutanpour discloses that the image data obtained with the processes initiated by the CPU and GPU can be updated and refreshed by utilizing a circular addressing scheme in the buffer (memory)), the processing circuitry is configured to: map a plurality of first points of the first image in the updated image set to a plurality of second points in the second image in the updated image set (paragraph [66], Forutanpour discloses mapping points from the first image to points from the second image so as to overlay the image content of both images and properly align the first and second images together); align the first image in the updated image set and the second image in the updated image set based on the mapping (paragraph [66], as cited above); and blend the first image in the updated image set and the second image in the updated image set to form the combined image (paragraph [69], Forutanpour discloses that the GPU can blend along the borders of the two rectangular images to stitch the two images together; and paragraph [132], as cited above). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fursich and Forutanpour as a whole to provide a better perspective view of the video image content obtained by the cameras, permitting the driver to drive and park more carefully.
Regarding claim 20, Fursich discloses wherein the processing circuitry is configured to: display the combined image on an electronic display in the commercial vehicle in a first mode (paragraph [75], Fursich discloses that the combined image can be displayed; paragraph [61], Fursich discloses that stitching of images is performed in fig.14, where a stitching line permits the stitching of images in a horizontal manner to form a combined image; and paragraph [81], Fursich discloses that the camera system for monitoring a vehicle can also be implemented with a trailer connected to a vehicle, which is similar to Applicant's paragraph [39] on page 5 of the specification, where Applicant discloses connecting a trailer to a commercial vehicle); display at least one of the first image and the second image separately without combination in a second mode (paragraph [75], Fursich discloses the display of image data in a selective manner for viewing an image captured by one camera or some of the cameras; and paragraph [82], Fursich discloses the display of images captured by one or more imaging sensors for viewing by the driver, thus permitting the separate display of images from one camera at a time); and toggle between the first mode and the second mode in response to receipt of a toggle command from an occupant of the commercial vehicle (paragraph [75], Fursich discloses the display of image data in a selective manner for viewing an image captured by one camera, some of the cameras, or all of the cameras; paragraph [61], as cited above; and paragraph [82], as cited above).
Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Fursich (US 2019/0080185) and Forutanpour (US 2018/0357804) in view of Hong (US 2022/0203894).
Regarding claim 2, Fursich and Forutanpour do not disclose wherein: the different respective optical angles comprise a first optical angle of the first camera and a second optical angle of the second camera; the first optical angle is less than 90°; and the second optical angle is less than or equal to 90° and is greater than the first optical angle.
However, Hong teaches wherein: the different respective optical angles comprise a first optical angle of the first camera and a second optical angle of the second camera (paragraph [65], Hong discloses one example in which the first camera module can have an optical angle of 33 degrees and the second camera module an optical angle of 45 degrees in a parking mode; paragraph [67], Hong discloses another example in which the first camera module can have an optical angle of 39 degrees and the second camera module an optical angle of 51 degrees in a driving mode; and Hong further discloses that numerous detection angles can be set depending on the positioning of the vehicle); the first optical angle is less than 90° (paragraphs [65] and [67], as cited above, where 33 degrees and 39 degrees are each less than 90 degrees); and the second optical angle is less than or equal to 90° and is greater than the first optical angle (paragraphs [65] and [67], as cited above, where 45 degrees is greater than 33 degrees and 51 degrees is greater than 39 degrees, each being less than 90 degrees). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fursich, Forutanpour, and Hong as a whole to capture the appropriate perspective views of the cameras so as to drive and park safely when maneuvering the vehicle at any position.
Regarding claim 12, Fursich and Forutanpour do not disclose wherein: the different respective optical angles comprise a first optical angle of the first camera and a second optical angle of the second camera; the first optical angle is less than 90°; and the second optical angle is less than or equal to 90° and is greater than the first optical angle.
However, Hong teaches wherein: the different respective optical angles comprise a first optical angle of the first camera and a second optical angle of the second camera (paragraph [65], Hong discloses one example in which the first camera module can have an optical angle of 33 degrees and the second camera module an optical angle of 45 degrees in a parking mode; paragraph [67], Hong discloses another example in which the first camera module can have an optical angle of 39 degrees and the second camera module an optical angle of 51 degrees in a driving mode; and Hong further discloses that numerous detection angles can be set depending on the positioning of the vehicle); the first optical angle is less than 90° (paragraphs [65] and [67], as cited above, where 33 degrees and 39 degrees are each less than 90 degrees); and the second optical angle is less than or equal to 90° and is greater than the first optical angle (paragraphs [65] and [67], as cited above, where 45 degrees is greater than 33 degrees and 51 degrees is greater than 39 degrees, each being less than 90 degrees). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fursich, Forutanpour, and Hong as a whole to capture the appropriate perspective views of the cameras so as to drive and park safely when maneuvering the vehicle at any position.
Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Fursich (US 2019/0080185), Forutanpour (US 2018/0357804) and Hong (US 2022/0203894) in view of Stein (US 2014/0372020).
Regarding claim 3, Fursich, Forutanpour and Hong do not disclose wherein the first camera has a first focal length, and the second camera has a second focal length that is less than the first focal length. However, Stein teaches wherein the first camera has a first focal length, and the second camera has a second focal length that is less than the first focal length (paragraph [36], Stein discloses that the first camera (e.g., camera 122) has a different focal length than the second camera (e.g., camera 124); paragraph [42], Stein discloses that the focal lengths depend on the camera: for instance, if the first camera (e.g., camera 122) focuses on a distant object, its focal length would be long, and if the second camera (e.g., camera 124) focuses on a nearby object, its focal length would be short; thus, the focal length of the second camera is shorter than (i.e., less than) the focal length of the first camera). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fursich, Forutanpour, Hong and Stein together as a whole for capturing images at any distance in order to properly locate objects within a monitored scene.
Regarding claim 13, Fursich, Forutanpour and Hong do not disclose wherein the first camera has a first focal length, and the second camera has a second focal length that is less than the first focal length. However, Stein teaches wherein the first camera has a first focal length, and the second camera has a second focal length that is less than the first focal length (paragraph [36], Stein discloses that the first camera (e.g., camera 122) has a different focal length than the second camera (e.g., camera 124); paragraph [42], Stein discloses that the focal lengths depend on the camera: for instance, if the first camera (e.g., camera 122) focuses on a distant object, its focal length would be long, and if the second camera (e.g., camera 124) focuses on a nearby object, its focal length would be short; thus, the focal length of the second camera is shorter than (i.e., less than) the focal length of the first camera). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fursich, Forutanpour, Hong and Stein together as a whole for capturing images at any distance in order to properly locate objects within a monitored scene.
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Fursich (US 2019/0080185) and Forutanpour (US 2018/0357804) in view of Song (US 2018/0089795).
Regarding claim 5, Fursich does not disclose such that both the first image and the second image are updated in the updated image set. However, Forutanpour teaches such that both the first image and the second image are updated in the updated image set (paragraph [132], Forutanpour discloses that GPU 18 blends the image content at the bottom of a first rectangular image with the image content at the top of a second rectangular image to generate a stitched rectangular image, a perspective transformation process; and paragraph [220], Forutanpour discloses that updates can be made to the image data obtained, with processes initiated by the CPU and GPU refreshing the image data by utilizing a circular addressing scheme in the buffer (memory) to update the image data stored in the buffer). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fursich and Forutanpour together as a whole for ascertaining a better perspective view of the video image content obtained by the cameras in order to permit the driver to drive and park more carefully.
Fursich and Forutanpour do not disclose wherein said performing a perspective transformation is performed for both the first image and the second image. However, Song teaches wherein the perspective transformation is performed for both the first image and the second image (paragraph [94], Song discloses that the first and second images are both dewarped, producing a perspective transformation of both the first and second images). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fursich, Forutanpour and Song together as a whole for permitting the images to be properly processed prior to combining the images in order to produce a clearer combined image so as to output higher-quality images.
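By way of technical background only, and not as the disclosure of any cited reference, a perspective transformation applied to both camera images can be sketched in NumPy as follows; the homography matrix H and the example images are hypothetical:

```python
import numpy as np

def warp_perspective(img, H, out_shape):
    """Apply a 3x3 homography H to img by inverse mapping with
    nearest-neighbor sampling; pixels mapping outside img are left 0."""
    h_out, w_out = out_shape
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    ones = np.ones_like(xs)
    dst = np.stack([xs, ys, ones]).reshape(3, -1).astype(float)
    src = Hinv @ dst
    src /= src[2]                       # perspective divide
    sx = np.round(src[0]).astype(int)
    sy = np.round(src[1]).astype(int)
    valid = (0 <= sx) & (sx < img.shape[1]) & (0 <= sy) & (sy < img.shape[0])
    out = np.zeros(out_shape, dtype=img.dtype)
    out.flat[np.flatnonzero(valid)] = img[sy[valid], sx[valid]]
    return out

# Hypothetical first and second camera images (small grayscale ramps).
first = np.arange(16, dtype=np.uint8).reshape(4, 4)
second = first.T.copy()

# Hypothetical homography: identity plus a slight horizontal shear.
H = np.array([[1.0, 0.1, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

# The transformation is performed for BOTH images, per the claim language.
warped_first = warp_perspective(first, H, first.shape)
warped_second = warp_perspective(second, H, second.shape)
```

In practice a dewarping pipeline of the kind Song describes would derive the mapping from camera calibration rather than from a fixed matrix; the sketch only illustrates the inverse-mapping structure of such a transformation.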
Regarding claim 15, Fursich does not disclose such that both the first image and the second image are updated in the updated image set. However, Forutanpour teaches such that both the first image and the second image are updated in the updated image set (paragraph [132], Forutanpour discloses that GPU 18 blends the image content at the bottom of a first rectangular image with the image content at the top of a second rectangular image to generate a stitched rectangular image, a perspective transformation process; and paragraph [220], Forutanpour discloses that updates can be made to the image data obtained, with processes initiated by the CPU and GPU refreshing the image data by utilizing a circular addressing scheme in the buffer (memory) to update the image data stored in the buffer). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fursich and Forutanpour together as a whole for ascertaining a better perspective view of the video image content obtained by the cameras in order to permit the driver to drive and park more carefully.
Fursich and Forutanpour do not disclose wherein the processing circuitry is configured to perform a perspective transformation for both the first image and the second image. However, Song teaches processing circuitry configured to perform a perspective transformation for both the first image and the second image (paragraph [94], Song discloses a processor 130 that processes the first and second images so that both are dewarped, producing a perspective transformation of both images). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fursich, Forutanpour and Song together as a whole for permitting the images to be properly processed prior to combining the images in order to produce a clearer combined image so as to output higher-quality images.
Claims 6-7 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Fursich (US 2019/0080185) and Forutanpour (US 2018/0357804) in view of Ota (US 2022/0415056).
Regarding claim 6, Fursich discloses performing processing prior to the performing the perspective transformation (paragraph [65], fig.15B, Fursich discloses cameras 14g and 14h, wherein camera 14g faces rearward from the driver (left) side to capture the field of view rearward of the driver (left) side as well as some side view, and camera 14h faces rearward from the passenger (right) side to capture the field of view rearward of the passenger (right) side as well as some side view, such that the images are gathered prior to performing a perspective transformation; paragraph [61], Fursich discloses that a perspective transformation process then takes place, in that a newly combined image is formed by dewarping and stitching, as illustrated in fig.14, to combine image portions obtained from plural cameras into an updated image set).
Fursich and Forutanpour do not disclose prior to said performing the perspective transformation, performing at least one of: a distortion correction for the first image from the first camera to mitigate image distortion caused by a lens or sensor of the first camera; and a distortion correction for the second image from the second camera to mitigate image distortion caused by a lens or sensor of the second camera. However, Ota teaches performing at least one of: a distortion correction for the first image from the first camera to mitigate image distortion caused by a lens or sensor of the first camera (paragraph [40], Ota discloses performing distortion correction for the first camera and the second camera to mitigate imperfections caused by camera lenses and other artifacts, and paragraph [41], Ota discloses that the first and second cameras have lenses that require distortion correction when their images are processed); and a distortion correction for the second image from the second camera to mitigate image distortion caused by a lens or sensor of the second camera (paragraph [40], Ota discloses performing distortion correction for the first camera and the second camera to mitigate imperfections caused by camera lenses and other artifacts, and paragraph [41], Ota discloses that the first and second cameras have lenses that require distortion correction when their images are processed).
Since Fursich discloses “performing processing prior to the performing the perspective transformation”, and Ota discloses “performing at least one of: a distortion correction for the first image from the first camera to mitigate image distortion caused by a lens or sensor of the first camera; and a distortion correction for the second image from the second camera to mitigate image distortion caused by a lens or sensor of the second camera”, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fursich, Forutanpour and Ota together as a whole for ascertaining the limitation “…prior to said performing the perspective transformation, performing at least one of: a distortion correction for the first image from the first camera to mitigate image distortion caused by a lens or sensor of the first camera; and a distortion correction for the second image from the second camera to mitigate image distortion caused by a lens or sensor of the second camera” so as to produce a clearer processed image at the output for viewing by eliminating artifacts.
Regarding claim 7, Fursich discloses performing processing prior to the performing the perspective transformation (paragraph [65], fig.15B, Fursich discloses cameras 14g and 14h, wherein camera 14g faces rearward from the driver (left) side to capture the field of view rearward of the driver (left) side as well as some side view, and camera 14h faces rearward from the passenger (right) side to capture the field of view rearward of the passenger (right) side as well as some side view, such that the images are gathered prior to performing a perspective transformation; paragraph [61], Fursich discloses that a perspective transformation process then takes place, in that a newly combined image is formed by dewarping and stitching, as illustrated in fig.14, to combine image portions obtained from plural cameras into an updated image set).
Fursich and Forutanpour do not disclose prior to said performing the perspective transformation, performing both of: a distortion correction for the first image from the first camera to mitigate image distortion caused by a lens or sensor of the first camera; and a distortion correction for the second image from the second camera to mitigate image distortion caused by a lens or sensor of the second camera. However, Ota teaches performing both of: a distortion correction for the first image from the first camera to mitigate image distortion caused by a lens or sensor of the first camera (paragraph [40], Ota discloses performing distortion correction for the first camera and the second camera to mitigate imperfections caused by camera lenses and other artifacts, and paragraph [41], Ota discloses that the first and second cameras have lenses that require distortion correction when their images are processed); and a distortion correction for the second image from the second camera to mitigate image distortion caused by a lens or sensor of the second camera (paragraph [40], Ota discloses performing distortion correction for the first camera and the second camera to mitigate imperfections caused by camera lenses and other artifacts, and paragraph [41], Ota discloses that the first and second cameras have lenses that require distortion correction when their images are processed).
Since Fursich discloses “performing processing prior to the performing the perspective transformation”, and Ota discloses “performing both of: a distortion correction for the first image from the first camera to mitigate image distortion caused by a lens or sensor of the first camera; and a distortion correction for the second image from the second camera to mitigate image distortion caused by a lens or sensor of the second camera”, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fursich, Forutanpour and Ota together as a whole for ascertaining the limitation “…prior to said performing the perspective transformation, performing both of: a distortion correction for the first image from the first camera to mitigate image distortion caused by a lens or sensor of the first camera; and a distortion correction for the second image from the second camera to mitigate image distortion caused by a lens or sensor of the second camera” so as to produce a clearer processed image at the output for viewing by eliminating artifacts.
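By way of technical background only, and not as the disclosure of any cited reference, a lens-distortion correction applied to both camera images can be sketched with a toy one-coefficient radial model; the coefficient k1 and the example images are hypothetical, and a real system would use calibrated parameters:

```python
import numpy as np

def undistort_radial(img, k1):
    """Correct simple one-coefficient radial lens distortion by inverse
    mapping: each undistorted pixel samples the distorted image at
    r_d = r_u * (1 + k1 * r_u^2), nearest neighbor.  A toy stand-in for
    a full Brown-Conrady camera model."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    dx, dy = xs - cx, ys - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2
    sx = np.round(cx + dx * scale).astype(int)
    sy = np.round(cy + dy * scale).astype(int)
    valid = (0 <= sx) & (sx < w) & (0 <= sy) & (sy < h)
    out = np.zeros_like(img)
    out[valid] = img[sy[valid], sx[valid]]
    return out

# Hypothetical first and second camera images.
first = np.arange(36, dtype=np.uint8).reshape(6, 6)
second = first[::-1].copy()

# Per the claim language, a distortion correction is applied to BOTH
# camera images; k1 = 0.01 is a hypothetical distortion coefficient.
corrected_first = undistort_radial(first, k1=0.01)
corrected_second = undistort_radial(second, k1=0.01)
```

Performing such a correction on each image before the perspective transformation, as the combination proposes, keeps lens artifacts out of the stitched result.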
Regarding claim 16, Fursich discloses wherein the processing circuitry is configured to perform processing prior to performance of the perspective transformation (paragraph [65], fig.15B, Fursich discloses cameras 14g and 14h, wherein camera 14g faces rearward from the driver (left) side to capture the field of view rearward of the driver (left) side as well as some side view, and camera 14h faces rearward from the passenger (right) side to capture the field of view rearward of the passenger (right) side as well as some side view, such that the images are gathered prior to performing a perspective transformation; paragraph [61], Fursich discloses that a perspective transformation process then takes place, in that a newly combined image is formed by dewarping and stitching, as illustrated in fig.14, to combine image portions obtained from plural cameras into an updated image set).
Fursich and Forutanpour do not disclose prior to performance of the perspective transformation, perform at least one of: a distortion correction for the first image from the first camera to mitigate image distortion caused by a lens or sensor of the first camera; and a distortion correction for the second image from the second camera to mitigate image distortion caused by a lens or sensor of the second camera. However, Ota teaches perform at least one of: a distortion correction for the first image from the first camera to mitigate image distortion caused by a lens or sensor of the first camera (paragraph [40], Ota discloses performing distortion correction for the first camera and the second camera to mitigate imperfections caused by camera lenses and other artifacts, and paragraph [41], Ota discloses that the first and second cameras have lenses that require distortion correction when their images are processed); and a distortion correction for the second image from the second camera to mitigate image distortion caused by a lens or sensor of the second camera (paragraph [40], Ota discloses performing distortion correction for the first camera and the second camera to mitigate imperfections caused by camera lenses and other artifacts, and paragraph [41], Ota discloses that the first and second cameras have lenses that require distortion correction when their images are processed).
Since Fursich discloses “perform processing prior to performance of the perspective transformation”, and Ota discloses “perform at least one of: a distortion correction for the first image from the first camera to mitigate image distortion caused by a lens or sensor of the first camera; and a distortion correction for the second image from the second camera to mitigate image distortion caused by a lens or sensor of the second camera”, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fursich, Forutanpour and Ota together as a whole for ascertaining the limitation “…prior to performance of the perspective transformation, perform at least one of: a distortion correction for the first image from the first camera to mitigate image distortion caused by a lens or sensor of the first camera; and a distortion correction for the second image from the second camera to mitigate image distortion caused by a lens or sensor of the second camera” so as to produce a clearer processed image at the output for viewing by eliminating artifacts.
Regarding claim 17, Fursich discloses wherein the processing circuitry is configured to, prior to performance of the perspective transformation (paragraph [65], fig.15B, Fursich discloses cameras 14g and 14h, wherein camera 14g faces rearward from the driver (left) side to capture the field of view rearward of the driver (left) side as well as some side view, and camera 14h faces rearward from the passenger (right) side to capture the field of view rearward of the passenger (right) side as well as some side view, such that the images are gathered prior to performing a perspective transformation; paragraph [61], Fursich discloses that a perspective transformation process then takes place, in that a newly combined image is formed by dewarping and stitching, as illustrated in fig.14, to combine image portions obtained from plural cameras into an updated image set).
Fursich and Forutanpour do not disclose prior to performance of the perspective transformation, perform both of: a distortion correction for the first image from the first camera to mitigate image distortion caused by a lens or sensor of the first camera; and a distortion correction for the second image from the second camera to mitigate image distortion caused by a lens or sensor of the second camera. However, Ota teaches perform both of: a distortion correction for the first image from the first camera to mitigate image distortion caused by a lens or sensor of the first camera (paragraph [40], Ota discloses performing distortion correction for the first camera and the second camera to mitigate imperfections caused by camera lenses and other artifacts, and paragraph [41], Ota discloses that the first and second cameras have lenses that require distortion correction when their images are processed); and a distortion correction for the second image from the second camera to mitigate image distortion caused by a lens or sensor of the second camera (paragraph [40], Ota discloses performing distortion correction for the first camera and the second camera to mitigate imperfections caused by camera lenses and other artifacts, and paragraph [41], Ota discloses that the first and second cameras have lenses that require distortion correction when their images are processed).
Since Fursich discloses “perform processing prior to performance of the perspective transformation”, and Ota discloses “perform both of: a distortion correction for the first image from the first camera to mitigate image distortion caused by a lens or sensor of the first camera; and a distortion correction for the second image from the second camera to mitigate image distortion caused by a lens or sensor of the second camera”, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fursich, Forutanpour and Ota together as a whole for ascertaining the limitation “…prior to performance of the perspective transformation, perform both of: a distortion correction for the first image from the first camera to mitigate image distortion caused by a lens or sensor of the first camera; and a distortion correction for the second image from the second camera to mitigate image distortion caused by a lens or sensor of the second camera” so as to produce a clearer processed image at the output for viewing by eliminating artifacts.
Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Fursich (US 2019/0080185) and Forutanpour (US 2018/0357804) in view of Moore (US 2020/0257917).
Regarding claim 8, Fursich does not disclose vertical stitching. However, Forutanpour teaches vertical stitching (paragraph [132], Forutanpour discloses that GPU 18 blends the image content at the bottom of a first rectangular image with the image content at the top of a second rectangular image to generate a stitched rectangular image, a perspective transformation process that forms a combined rectangular image in a vertical manner). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fursich and Forutanpour together as a whole for ascertaining a better perspective view of the video image content obtained by the cameras in order to permit the driver to drive and park more carefully.
Fursich and Forutanpour do not disclose comprising prior to said vertically stitching: performing at least one of cropping and zooming at least one of the first image and the second image. However, Moore teaches prior to stitching: performing at least one of cropping and zooming at least one of the first image and the second image (paragraph [28], Moore discloses that the first, second and third images can be cropped before the stitching process takes place for producing a combined image).
Since Forutanpour discloses vertical stitching, and Moore discloses “…prior to stitching: performing at least one of cropping and zooming at least one of the first image and the second image”, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fursich, Forutanpour and Moore together as a whole for ascertaining the limitation “…prior to said vertically stitching: performing at least one of cropping and zooming at least one of the first image and the second image”, in order to permit the images to be processed properly prior to stitching and thereby produce high-quality images.
Regarding claim 18, Fursich does not disclose vertical stitching. However, Forutanpour teaches vertical stitching (paragraph [132], Forutanpour discloses that GPU 18 blends the image content at the bottom of a first rectangular image with the image content at the top of a second rectangular image to generate a stitched rectangular image, a perspective transformation process that forms a combined rectangular image in a vertical manner). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fursich and Forutanpour together as a whole for ascertaining a better perspective view of the video image content obtained by the cameras in order to permit the driver to drive and park more carefully.
Fursich and Forutanpour do not disclose comprising prior to the vertical stitching: perform at least one of a crop and a zoom of at least one of the first image and the second image. However, Moore teaches prior to the stitching: perform at least one of a crop and a zoom of at least one of the first image and the second image (paragraph [28], Moore discloses that the first, second and third images can be cropped before the stitching process takes place for producing a combined image).
Since Forutanpour discloses vertical stitching, and Moore discloses “…prior to the stitching: perform at least one of a crop and a zoom of at least one of the first image and the second image”, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fursich, Forutanpour and Moore together as a whole for ascertaining the limitation “…prior to the vertical stitching: perform at least one of a crop and a zoom of at least one of the first image and the second image”, in order to permit the images to be processed properly prior to stitching and thereby produce high-quality images.
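By way of technical background only, and not as the disclosure of any cited reference, a crop-then-vertically-stitch operation of the kind proposed in the combination can be sketched in NumPy; the image sizes and the overlap amount are hypothetical:

```python
import numpy as np

def crop_then_vstack(top_img, bottom_img, overlap):
    """Crop `overlap` rows from the bottom of the top image, then stitch
    the two images vertically (top over bottom), analogous to joining the
    bottom of one rectangular image to the top of the other."""
    assert top_img.shape[1] == bottom_img.shape[1], "widths must match"
    cropped_top = top_img[:top_img.shape[0] - overlap, :]
    return np.vstack([cropped_top, bottom_img])

# Hypothetical first and second camera images of the same width.
first = np.ones((4, 5), dtype=np.uint8)       # e.g., upper field of view
second = np.full((3, 5), 2, dtype=np.uint8)   # e.g., lower field of view

# Crop one redundant row from the first image, then stitch vertically,
# yielding a single taller rectangular image.
stitched = crop_then_vstack(first, second, overlap=1)
```

A production stitcher would blend the seam region rather than simply concatenating rows; the sketch only shows the crop-before-stitch ordering.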
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALLEN C WONG whose telephone number is (571)272-7341. The examiner can normally be reached on Flex Monday-Thursday 9:30am-7:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sath V Perungavoor can be reached on 571-272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALLEN C WONG/Primary Examiner, Art Unit 2488