DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The Amendment filed on 12/5/2025 has been entered. Claims 1 and 3 have been canceled; claims 2 and 4-29 remain pending in the application. Applicant’s amendments to the claims have overcome the previous objection.
Priority
Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2, 4, 8-15 and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Colmenares U.S. Patent 10949986 in view of Tay U.S. Patent Application 20190311546, and further in view of Hoy U.S. Patent Application 20170316259.
Regarding claim 2, Colmenares discloses a system for generating three-dimensional information of a scene comprising:
a plurality of cameras, the cameras configured to be positioned to view the scene, and the plurality of cameras configured to generate data representative of at least two images taken at different positions relative to the scene (col. 4 line 41-44: one or more cameras 119 (e.g., a pair of the cameras 119) configured to detect the structured light projected onto the scene 108 by the projector 118 to estimate a depth of a surface in the scene 108; col. 10 line 15-21: FIG. 6… a patient 665 is positioned at least partially within the scene 108 below the camera array 110); and
the plurality of cameras configured to transmit data associated at least in part with the at least two images to one or more computer systems; and the one or more computer systems configured to: obtain the transmitted associated data from the at least two images (col. 7 line 7-13: The processing device 102 can process received inputs from the input controller 106 and process the captured images from the camera array 110 to generate output images; see fig. 1; col. 4 line 56-59: the processing device 102 includes an image processing device 107 (e.g., an image processor, an image processing module, an image processing unit, etc.) and a tracking processing device 109); and
extract at least a portion of the associated data (col. 12 line 53-55: the processing device 102 can extract feature points from a ChArUco target and process the feature points with the OpenCV camera calibration routine).
Colmenares discloses all the features with respect to claim 2 as outlined above. However, Colmenares fails to disclose pixel data; use the at least a portion of the associated pixel data to generate a representation of a 3D neighbourhood that is representative of at least a portion of the scene based at least in part on a projection of the 3D neighbourhood in at least one of the images; use the at least a portion of the associated pixel data to determine a likelihood one or more physical surfaces in the scene intersects the 3D neighbourhood; and wherein at least one camera of the plurality of cameras is associated with a mobile platform or vehicle, and at least one other camera of the plurality of cameras is associated with a stationary platform or a separate vehicle.
Tay discloses pixel data (paragraph [0100]: 2D color pixels depicting the dynamic object in the concurrent 2D color image),
use the at least a portion of the associated pixel data to generate a representation of a 3D neighbourhood that is representative of at least a portion of the scene based at least in part on a projection of the 3D neighbourhood in at least one of the images (paragraph [0100]: augment a cluster of points representing a dynamic object in the current 3D point cloud with synthetic color pixels generated from 2D color pixels depicting the dynamic object in the concurrent 2D color image, thereby increasing density of points depicting this dynamic object in the augmented 3D point cloud; paragraph [0123]: isolate a cluster of color pixels in the concurrent color image that correspond to a cluster of points representing this particular object in the 3D point cloud; augment this cluster of points with synthetic 3D color points generated based on this cluster of color pixels); and
use the at least a portion of the associated pixel data to determine a likelihood one or more physical surfaces in the scene intersects the 3D neighbourhood (paragraph [0060]: porting 2D color pixels from the 2D color image onto a 2D (i.e., planar) manifold that (approximately) intersects this cluster of points in the 3D point cloud, thereby preserving 2D color pixel data within a 2D domain; paragraph [0075]: calculate planes that approximately intersect clusters of points that represent faces of other dynamic objects in the field).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Colmenares to display color pixels as taught by Tay, in order to render 2D and 3D data within a 3D image.
Colmenares as modified by Tay discloses all the features with respect to claim 2 as outlined above. However, Colmenares as modified by Tay fails to disclose at least one camera of the plurality of cameras is associated with a mobile platform or vehicle, and at least one other camera of the plurality of cameras is associated with a stationary platform or a separate vehicle.
Hoy discloses at least one camera of the plurality of cameras is associated with a mobile platform or vehicle, and at least one other camera of the plurality of cameras is associated with a stationary platform or a separate vehicle (paragraph [0041]: the system uses a plurality of monitoring devices, including both stationary devices such as surveillance cameras and microphones 301, as well as registered mobile devices such as mobile phone cameras 305 or cameras of a wearable device 303 (e.g., Google Glass)).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify the combination of Colmenares and Tay to use both mobile and stationary cameras as taught by Hoy, in order to detect anomalies efficiently.
Regarding claim 4, Colmenares as modified by Tay and Hoy discloses the system of claim 2, wherein the at least a portion of the associated pixel data of the 3D neighbourhood includes one or more of the following: spectral data and spectral data characteristic of a substantive physical surface (Colmenares’ col. 4 line 31-33: the trackers 114 and the cameras 112 can have different spectral sensitivities (e.g., infrared vs. visible wavelength)).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Colmenares to display color pixels as taught by Tay, in order to render 2D and 3D data within a 3D image; and to modify the combination of Colmenares and Tay to use both mobile and stationary cameras as taught by Hoy, in order to detect anomalies efficiently.
Regarding claim 8, Colmenares as modified by Tay and Hoy discloses the system of claim 2, wherein the plurality of cameras are configured to generate pixel data representative of at least three images taken at different positions relative to the scene and the at least a portion of the associated pixel data is a subset of the pixel data determined by the projection of the 3D neighbourhood in at least one of the camera images (Colmenares’ col. 10 line 15-21: FIG. 6… a patient 665 is positioned at least partially within the scene 108 below the camera array 110. The surgical application can be a procedure to be carried out on a portion of interest of the patient; col. 2 line 36-40: The imaging system can then project the estimated 3D position into two-dimensional (2D) images from the cameras, and define a region of interest (ROI) in each of the images based on the projected position of the tool tip; col. 10 line 44-col. 11 line 2: the cameras 112 each have a field of view 664 of the scene 108, and the trackers 114 each have a field of view 666 of the scene 108… the fields of view 666 of the trackers 114 can at least partially overlap one another (and/or the fields of view 664 of the cameras 112) to together define a tracking volume… the regions of overlap are tiled such that the resulting imaging volume covered by all the cameras 112 has a selected volume that exists as a subset of the volume covered by the trackers 114).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Colmenares to display color pixels as taught by Tay, in order to render 2D and 3D data within a 3D image; and to modify the combination of Colmenares and Tay to use both mobile and stationary cameras as taught by Hoy, in order to detect anomalies efficiently.
Regarding claim 9, Colmenares as modified by Tay and Hoy discloses the system of claim 2, wherein the plurality of cameras are configured to generate pixel data representative of at least four images taken at different positions relative to the scene and the at least a portion of the associated pixel data is a subset of the pixel data determined by the projection of the 3D neighbourhood in at least one of the camera images (Colmenares’ col. 10 line 15-21: FIG. 6… a patient 665 is positioned at least partially within the scene 108 below the camera array 110. The surgical application can be a procedure to be carried out on a portion of interest of the patient; col. 2 line 36-40: The imaging system can then project the estimated 3D position into two-dimensional (2D) images from the cameras, and define a region of interest (ROI) in each of the images based on the projected position of the tool tip; col. 10 line 44-col. 11 line 2: the cameras 112 each have a field of view 664 of the scene 108, and the trackers 114 each have a field of view 666 of the scene 108… the fields of view 666 of the trackers 114 can at least partially overlap one another (and/or the fields of view 664 of the cameras 112) to together define a tracking volume… the regions of overlap are tiled such that the resulting imaging volume covered by all the cameras 112 has a selected volume that exists as a subset of the volume covered by the trackers 114).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Colmenares to display color pixels as taught by Tay, in order to render 2D and 3D data within a 3D image; and to modify the combination of Colmenares and Tay to use both mobile and stationary cameras as taught by Hoy, in order to detect anomalies efficiently.
Regarding claim 10, Colmenares as modified by Tay and Hoy discloses the system of claim 9, wherein the at least four images or at least three images are taken at different positions relative to the scene within a relatively static time period (Tay’s paragraph [0054]: a first 2D color image recorded at a first time via a 2D color camera arranged on an autonomous vehicle in Block S210; accessing a first 3D point cloud recorded at approximately the first time via a 3D depth sensor arranged on the autonomous vehicle).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Colmenares to display color pixels as taught by Tay, in order to render 2D and 3D data within a 3D image; and to modify the combination of Colmenares and Tay to use both mobile and stationary cameras as taught by Hoy, in order to detect anomalies efficiently.
Regarding claim 11, Colmenares as modified by Tay and Hoy discloses the system of claim 2, wherein the multiple 3D neighbourhoods in aggregate do not cover the entire scene (Colmenares’ col. 10 line 44-col. 11 line 2: the cameras 112 each have a field of view 664 of the scene 108, and the trackers 114 each have a field of view 666 of the scene 108… the fields of view 666 of the trackers 114 can at least partially overlap one another (and/or the fields of view 664 of the cameras 112) to together define a tracking volume… the regions of overlap are tiled such that the resulting imaging volume covered by all the cameras 112 has a selected volume that exists as a subset of the volume covered by the trackers 114).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Colmenares to display color pixels as taught by Tay, in order to render 2D and 3D data within a 3D image; and to modify the combination of Colmenares and Tay to use both mobile and stationary cameras as taught by Hoy, in order to detect anomalies efficiently.
Regarding claim 12, Colmenares as modified by Tay and Hoy discloses the system of claim 11, wherein the multiple 3D neighbourhoods are substantially centered or substantially aligned along at least one line projecting into the scene from at least one fixed 3D point relative to a 3D position of camera centers at a time, or times, a camera or cameras captured the images (Tay’s paragraph [0026]: the system can align any other horizontal reference point in the image plane with the center of the corresponding lane. For example, for a color camera plane of a left-forward camera, the system can align a pixel one-third of the width of the image plane from the right edge of the image plane with the center of the corresponding lane).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Colmenares to display color pixels as taught by Tay, in order to render 2D and 3D data within a 3D image; and to modify the combination of Colmenares and Tay to use both mobile and stationary cameras as taught by Hoy, in order to detect anomalies efficiently.
Regarding claim 13, Colmenares as modified by Tay and Hoy discloses the system of claim 11, wherein data collected from within at least a portion of the multiple 3D neighbourhoods is used to determine a likelihood that the physical surface is at least partially contained within the multiple 3D neighbourhoods (Tay’s paragraph [0060]: porting 2D color pixels from the 2D color image onto a 2D (i.e., planar) manifold that (approximately) intersects this cluster of points in the 3D point cloud, thereby preserving 2D color pixel data within a 2D domain; paragraph [0075]: calculate planes that approximately intersect clusters of points that represent faces of other dynamic objects in the field).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Colmenares to display color pixels as taught by Tay, in order to render 2D and 3D data within a 3D image; and to modify the combination of Colmenares and Tay to use both mobile and stationary cameras as taught by Hoy, in order to detect anomalies efficiently.
Regarding claim 14, Colmenares as modified by Tay and Hoy discloses the system of claim 13, wherein the portion of the multiple 3D neighbourhoods is representative of a line passing through the scene (Tay’s paragraph [0013]: For each camera on the autonomous vehicle, the system can define an image plane that is perpendicular to the ground plane... the system can set this offset distance such that an image plane defined for a 2D image feed recorded by a laterally-facing camera (e.g., a left-forward camera, a right-forward camera) on the autonomous vehicle is approximately centered on an adjacent road lane; see fig. 2).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Colmenares to display color pixels as taught by Tay, in order to render 2D and 3D data within a 3D image; and to modify the combination of Colmenares and Tay to use both mobile and stationary cameras as taught by Hoy, in order to detect anomalies efficiently.
Regarding claim 15, Colmenares as modified by Tay and Hoy discloses the system of claim 14, wherein the line is straight, substantially straight, curved, continuous, discontinuous, substantially continuous, substantially discontinuous or combinations thereof and substantially follows contours of at least one physical surface in the scene (Tay’s paragraph [0013]: For each camera on the autonomous vehicle, the system can define an image plane that is perpendicular to the ground plane... the system can set this offset distance such that an image plane defined for a 2D image feed recorded by a laterally-facing camera (e.g., a left-forward camera, a right-forward camera) on the autonomous vehicle is approximately centered on an adjacent road lane; see fig. 2).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Colmenares to display color pixels as taught by Tay, in order to render 2D and 3D data within a 3D image; and to modify the combination of Colmenares and Tay to use both mobile and stationary cameras as taught by Hoy, in order to detect anomalies efficiently.
Regarding claim 29, Colmenares as modified by Tay and Hoy discloses the system of claim 2, wherein the system is configured to generate three-dimensional information in real-time (Colmenares’ col. 7 line 7-13: The processing device 102 can process received inputs from the input controller 106 and process the captured images from the camera array 110 to generate output images corresponding to the virtual perspective in substantially real-time as perceived by a viewer of the display device 104 (e.g., at least as fast as the framerate of the camera array 110)).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Colmenares to display color pixels as taught by Tay, in order to render 2D and 3D data within a 3D image; and to modify the combination of Colmenares and Tay to use both mobile and stationary cameras as taught by Hoy, in order to detect anomalies efficiently.
Claims 5-7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Colmenares U.S. Patent 10949986 in view of Tay U.S. Patent Application 20190311546, in view of Hoy U.S. Patent Application 20170316259, and further in view of Rowell U.S. Patent Application 20200342652.
Regarding claim 5, Colmenares as modified by Tay and Hoy discloses all the features with respect to claim 2 as outlined above. However, Colmenares as modified by Tay and Hoy fails to disclose that at least a portion of the associated pixel data includes optical flow information.
Rowell discloses at least a portion of the associated pixel data includes optical flow information (paragraph [0086]: Optical flow information may be represented in an object direction map including direction vectors for every pixel in an image).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify the combination of Colmenares, Tay, and Hoy to include optical flow information as taught by Rowell, in order to provide sufficient data for performing tasks without the expense of complex hardware.
Regarding claim 6, Colmenares as modified by Tay, Hoy and Rowell discloses the system of claim 2, wherein the at least a portion of the associated pixel data includes pixel-level spectral data and/or pixel-level optical flow information derived from the projection of the 3D neighbourhood in at least one of the images (Tay’s paragraph [0123]: isolate a cluster of color pixels in the concurrent color image that correspond to a cluster of points representing this particular object in the 3D point cloud; Rowell’s paragraph [0086]: Optical flow information may be represented in an object direction map including direction vectors for every pixel in an image).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify the combination of Colmenares, Tay, and Hoy to include optical flow information as taught by Rowell, in order to provide sufficient data for performing tasks without the expense of complex hardware.
Regarding claim 7, Colmenares as modified by Tay, Hoy and Rowell discloses the system of claim 2, wherein the one or more computer systems is configured to use at least a substantial portion of the at least a portion of the associated pixel data to determine an estimated velocity for one or more physical surfaces in at least one of three potential dimensions of space relative to the plurality of cameras (Tay’s paragraph [0069]: derive object perception data from the 3D point cloud, such as including predictions of types and relative velocities of objects represented by points in the 3D point cloud; Rowell’s paragraph [0086]: The AIDRU 106 may also express optical flow information as an object velocity map including velocity direction vectors for every pixel in an image. Optical flow maps including object flow vectors assembled by combining direction and velocity vectors may also be generated by the AIDRU 106).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify the combination of Colmenares, Tay, and Hoy to include optical flow information as taught by Rowell, in order to provide sufficient data for performing tasks without the expense of complex hardware.
Regarding claim 17, Colmenares as modified by Tay, Hoy and Rowell discloses the system of claim 2, wherein already calculated likelihood calculations within a cost matrix are used at least in part for defining subsequent cost matrices whose columns are substantially aligned with at least one other line across at least one image (Tay’s paragraph [0060]: porting 2D color pixels from the 2D color image onto a 2D (i.e., planar) manifold that (approximately) intersects this cluster of points in the 3D point cloud, thereby preserving 2D color pixel data within a 2D domain; paragraph [0075]: calculate planes that approximately intersect clusters of points that represent faces of other dynamic objects in the field; Rowell’s paragraph [0150]: Rectification matrices 636 are generated using the rotation matrix 634 and translation vector 635, wherein the rotation matrix 634 describes a rotational correction aligning images captured by a virtual stereo camera device so that the image planes of the left and right image channels are on the same plane).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify the combination of Colmenares, Tay, and Hoy to include optical flow information as taught by Rowell, in order to provide sufficient data for performing tasks without the expense of complex hardware.
Claims 16 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Colmenares U.S. Patent 10949986 in view of Tay U.S. Patent Application 20190311546, in view of Hoy U.S. Patent Application 20170316259, and further in view of Simon U.S. Patent Application 20120257792.
Regarding claim 16, Colmenares as modified by Tay and Hoy discloses all the features with respect to claim 14 as outlined above. However, Colmenares as modified by Tay and Hoy fails to disclose the line has a string like or ribbon like shape and substantially follows contours of at least one physical surface in the scene.
Simon discloses the line has a string like or ribbon like shape and substantially follows contours of at least one physical surface in the scene (paragraph [0246]: Two successive images are mapped together by using homologous primitives (representing the same details of the scene) belonging to the overlaps of the images... a (one-off) intersection of a road portion (linear) or the contour of a field (surface)).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify the combination of Colmenares, Tay, and Hoy to use homologous primitives as taught by Simon, in order to geo-reference an area.
Regarding claim 18, Colmenares as modified by Tay, Hoy and Simon discloses the system of claim 2, wherein likelihood calculations within a portion of 3D neighbourhoods produce numeric results that are independent of an order in which at least a portion of the data from intersection points derived from a set of images is processed (Simon’s paragraph [0246]: Two successive images are mapped together by using homologous primitives (representing the same details of the scene) belonging to the overlaps of the images... a (one-off) intersection of a road portion (linear) or the contour of a field (surface). These homologous primitives are independent of the range-found points; Tay’s paragraph [0060]: porting 2D color pixels from the 2D color image onto a 2D (i.e., planar) manifold that (approximately) intersects this cluster of points in the 3D point cloud, thereby preserving 2D color pixel data within a 2D domain; paragraph [0075]: calculate planes that approximately intersect clusters of points that represent faces of other dynamic objects in the field).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify the combination of Colmenares, Tay, and Hoy to use homologous primitives as taught by Simon, in order to geo-reference an area.
Claims 19-21 are rejected under 35 U.S.C. 103 as being unpatentable over Colmenares U.S. Patent 10949986 in view of Tay U.S. Patent Application 20190311546, in view of Hoy U.S. Patent Application 20170316259, and further in view of Aratani U.S. Patent Application 20200104969.
Regarding claim 19, Colmenares as modified by Tay and Hoy discloses all the features with respect to claim 2 as outlined above. However, Colmenares as modified by Tay and Hoy fails to disclose that an optimization calculation is repeated for a plurality of lines derived from selected images.
Aratani discloses an optimization calculation is repeated for a plurality of lines derived from selected images (paragraph [0007]: The feature point detected from each key frame is searched on an epipolar line and associated with each other… a map is calculated with high accuracy by nonlinear optimization calculation).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify the combination of Colmenares, Tay, and Hoy to perform an optimization calculation as taught by Aratani, in order to perform estimation with high accuracy.
Regarding claim 20, Colmenares as modified by Tay, Hoy and Aratani discloses the system of claim 19, wherein the plurality of lines is selected from epipolar lines (Aratani’s paragraph [0007]: The feature point detected from each key frame is searched on an epipolar line and associated with each other… a map is calculated with high accuracy by nonlinear optimization calculation).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify the combination of Colmenares, Tay, and Hoy to perform an optimization calculation as taught by Aratani, in order to perform estimation with high accuracy.
Regarding claim 21, Colmenares as modified by Tay, Hoy and Aratani discloses the system of claim 19, wherein a portion of the plurality of lines is selected from epipolar lines (Aratani’s paragraph [0007]: The feature point detected from each key frame is searched on an epipolar line and associated with each other… a map is calculated with high accuracy by nonlinear optimization calculation).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify the combination of Colmenares, Tay, and Hoy to perform an optimization calculation as taught by Aratani, in order to perform estimation with high accuracy.
Claim 27 is rejected under 35 U.S.C. 103 as being unpatentable over Colmenares U.S. Patent 10949986 in view of Tay U.S. Patent Application 20190311546, in view of Hoy U.S. Patent Application 20170316259, and further in view of Dai U.S. Patent Application 20190012795.
Regarding claim 27, Colmenares as modified by Tay and Hoy discloses all the features with respect to claim 2 as outlined above. However, Colmenares as modified by Tay and Hoy fails to disclose that the plurality of cameras are not arranged so that their camera centers are substantially coplanar.
Dai discloses the plurality of cameras are not arranged so that their camera centers are substantially coplanar (paragraph [0036]: FIG. 4A, points x and x′, point X, and camera centers C and C′ are coplanar).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify the combination of Colmenares, Tay, and Hoy to align camera centers as taught by Dai, in order to identify and track objects using multiple cameras.
Claim 28 is rejected under 35 U.S.C. 103 as being unpatentable over Colmenares U.S. Patent 10949986 in view of Tay U.S. Patent Application 20190311546, in view of Hoy U.S. Patent Application 20170316259, and further in view of Roy U.S. Patent 6011863.
Regarding claim 28, Colmenares as modified by Tay and Hoy discloses all the features with respect to claim 2 as outlined above. However, Colmenares as modified by Tay and Hoy fails to disclose that the plurality of cameras are not arranged so that their camera centers are substantially colinear.
Roy discloses the plurality of cameras are not arranged so that their camera centers are substantially colinear (col. 5 line 8-13: By using the line joining the camera's optical centers as the cylinder axis (see FIG. 1), all straight lines on the cylindrical surface are necessarily parallel to the cylinder axis and focus of expansion, making them suitable to be used as epipolar lines).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify the combination of Colmenares, Tay, and Hoy to align camera centers as taught by Roy, in order to form a stereoscopic image by finding a set of epipolar lines that cover the stereoscopic image while introducing a minimum of distortion.
Allowable Subject Matter
Claims 22-26 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Claim 22 recites that the data associated with intersection points that are input into the likelihood calculations for the 3D neighbourhood, and that are associated with 3D scene information substantially aligned on at least one reference surface, is calculated from the associated pixel data extracted from at least two rectified images separated by a pixel offset.
Colmenares 10949986, Tay 20190311546, Hoy 20170316259, Rowell 20200342652 and Benear 20030025778, whether taken alone or in combination, fail to teach or suggest these features. These limitations, when read in light of the remaining limitations of the claim and the claims from which it depends, render the claim allowable subject matter.
Claims 23-26 depend from claim 22 and contain allowable subject matter for the same reasons as claim 22.
Response to Arguments
Applicant's arguments filed 12/5/2025, pages 7-8, with respect to the rejection(s) of claim(s) 2 under 35 U.S.C. 103, have been fully considered but are moot in view of the new ground(s) of rejection made under 35 U.S.C. 103 as being unpatentable over Colmenares U.S. Patent 10949986 in view of Tay U.S. Patent Application 20190311546, and further in view of Hoy U.S. Patent Application 20170316259, as outlined above.
Applicant argues on pages 7-8 that "Nowhere in Colmenares is there any disclosure or suggestion of the claimed arrangement of mobile and stationary cameras or mobile cameras on separate vehicles. Accordingly, Colmenares does not teach or suggest the claimed invention. Tay fails to cure the shortcomings of Colmenares because Tay also fails to disclose or suggest the claimed arrangement of cameras, i.e., mobile and stationary cameras or mobile cameras on separate vehicles."
In response, the rejection is based on the combination of Colmenares, Tay, and Hoy. Hoy discloses at least one camera of the plurality of cameras is associated with a mobile platform or vehicle, and at least one other camera of the plurality of cameras is associated with a stationary platform or a separate vehicle (paragraph [0041]: the system uses a plurality of monitoring devices, including both stationary devices such as surveillance cameras and microphones 301, as well as registered mobile devices such as mobile phone cameras 305 or cameras of a wearable device 303 (e.g., Google Glass)).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Yi Yang whose telephone number is (571)272-9589. The examiner can normally be reached on Monday-Friday 9:00 AM-6:00 PM EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik can be reached on 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/YI YANG/
Primary Examiner, Art Unit 2616