DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Applicant Response to Official Action
The response filed on 9/24/2025 has been entered and made of record.
Acknowledgment
Claim 3, canceled on 9/24/2025, is acknowledged by the examiner.
Claim 21, added on 9/24/2025, is acknowledged by the examiner.
Claims 1, 14, and 20, amended on 9/24/2025, are acknowledged by the examiner.
Response to Arguments
Applicant’s arguments with respect to claims 1, 14, 20, and their dependent claims have been considered but are moot in view of the new grounds of rejection necessitated by Applicant’s amendments. The examiner addresses Applicant’s main arguments below.
Regarding the drawing objection, the amendment filed on 9/24/2025 addresses the issue. As a result, the drawing objection is withdrawn.
Regarding the 35 U.S.C. 112(b) rejection, the amendment filed on 9/24/2025 addresses the issue. As a result, the 35 U.S.C. 112(b) rejection is withdrawn.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) ELEMENT IN CLAIM FOR A COMBINATION.—An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such a claim limitation is “a computer vision application” in claim 16.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of pre-AIA 35 U.S.C. 112, second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-21 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention. Amended claims 1, 14, and 20 recite "the pairs of images with known landmarks". There is insufficient antecedent basis for this limitation in the claims. Therefore, claims 1, 14, 20, and their dependent claims are indefinite and are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
Claims 5 and 13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention. Claims 5 and 13 recite "The method according to claim 3". However, claim 3 was canceled; hence it is unclear from which claim claims 5 and 13 are intended to depend. Therefore, claims 5 and 13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under pre-AIA 35 U.S.C. 103(a) are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C. 102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a).
Claims 1-2, 4-6, 8, and 11-20 are rejected under 35 U.S.C. 103 as being unpatentable over Nakata (US Patent Application Publication 2025/0126372 A1) (“Nakata”) in view of Raskob et al. (US Patent 11,423,610 B2) (“Raskob”), and further in view of Shimizu et al. (US Patent 11,250,708 B2) (“Shimizu”).
Regarding claim 1, Nakata meets the claim limitations, as follows:
An image synchronization method (a pattern matching method) [Nakata: para. 0005] comprising: for pairs of images (pairs of images) [Nakata: para. 0115], iteratively performing steps ((a predetermined program) [Nakata: para. 0086]; (performs electrical matching) [Nakata: para. 0087]; (FIGS. 11 and 12 are used to illustrate a specific configuration example of a vehicle-mounted control device 1 that executes a three-dimensional information acquisition process described so far. In the following, it is assumed that n cameras (cameras C1 to Cn) are mounted on the vehicle 100 to monitor each direction around the vehicle such that image capturing areas of respective cameras overlap. Each camera is connected to the vehicle-mounted control device 1) [Nakata: para. 0087; Figs. 11-12]) comprising: comparing each of a set of images obtained from a first camera with an image of a second camera (comparing the detection positions of the object by a plurality of camera images and another external sensor) [Nakata: para. 0061; Figs. 2-6]; and calculating a flatness score ((When using the stereo cameras, since the relationship between positions and postures (viewing direction) of the left and right cameras are known, the distance to the ranging target around the vehicle (three-dimensional information) can be calculated using the method of triangulation by obtaining the parallax by mapping the same target portions in the left and right images captured by the left and right cameras using, such as a pattern matching method) [Nakata: para. 0005] – Note: Paragraph [0011] of the original specification explains the flatness score as follows: “Further, a disparity between the pair of matched points can be used to calculate a distance that is associated with a 3D position for the pair of matched points and is treated as the flatness score”. As a result, Nakata discloses this limitation) that is indicative of an error aggregated from synchronization of the pairs of images with known landmarks (when at least one of the two monocular cameras is a camera that employs a rolling-shutter type image sensor such as a CMOS sensor (an image sensor that sequentially captures images by shifting exposure timing for each image capturing line on the light receiving surface), even when the start timings of the image capturing by the two monocular cameras are synchronized, the exposure timing of one monocular camera capturing an object from a certain position in a certain direction (that is, the vertical position of the image capturing line on the side of one monocular camera) and the exposure timing of the other monocular camera capturing the same object from another position in a different direction (that is, the vertical position of the image capturing line on the side of the other monocular camera) may differ. In this case, there is a problem that an error occurs in the distance measurement of the object due to the change in the relative positions of the vehicle (camera) and the ranging target that occurs during the period corresponding to the difference in the exposure timing (difference in the vertical positions of the image capturing lines of both the cameras)) [Nakata: para. 0008] among the pairs of images (pairs of images) [Nakata: para. 0115], selecting a pair of images associated with a lowest error ((Compared with the imaging method in FIG. 6, which simultaneously captures the left and right image capturing lines projecting one specific point (the point P) of the ranging target OB, in the imaging method of the modification illustrated in FIGS. 7A to 7C, the deviation of the image capturing timing between the left and right image capturing lines including the point P may be larger. Thus, the ranging error is also somewhat larger. Compared with the case where the exposure timing relationship between the left and right cameras is always fixed (for example, always using the combination AB in FIG. 7B), it is possible to suppress the ranging error because the image capturing timings are controlled according to a ranging target region. In order to suppress the ranging error even when the region is divided into smaller regions, it is sufficient that the region is divided into more regions and the difference in the exposure timing between the left camera and the right camera for each region is defined. Furthermore, it is conceivable to define the region by overlapping the region, select the region that best covers a target region desired to be distance measured, and define the difference) [Nakata: para. 0069-0070 – Note: Nakata discloses a method of selecting a pair of images with the least error]; (In FIGS. 10A and 10B, for each point in the left image Il3, the corresponding points are illustrated in the right image Ir3a and the right image Ir3b, and in the right image Ir3a and the right image Ir3b, the ranging error at each point is indicated by an "o" or "x" around the point. Here, the magnitudes of "o" or "x" indicate the degree of error, with "o" indicating that the error appears as a distance closer than it actually is and "x" indicating that the error appears as a distance farther than it actually is) [Nakata: para. 0081] – Note: Nakata further discloses the marks "o" and "x", which indicate the degree of error. It is clear from these features that a pair of images with the least error can be selected from the “o” marks, for which the error distance is smaller); and identifying the images of the selected pair of images as being synchronized ((Recent vehicles are equipped with monocular cameras mounted at various positions and in various orientations on the vehicle to monitor each direction around the vehicle. Therefore, by combining captured images from any two monocular cameras with overlapping image capturing areas, it is possible to acquire three-dimensional information around the vehicle using the method of triangulation described above, even in directions not monitored by the stereo camera. To do this, it is sufficient that the image capturing timings of the two monocular cameras are synchronized such that the overlapping image capturing areas are captured simultaneously) [Nakata: para. 0007]; (When selecting the images, the stereo view target image selection unit 12g uses the information related to selection of the ranging target OB obtained from the camera signal processing operation control unit 12a. Taking the later processing into account, the images used for the distance measurement are paired, and information on the posture of the camera that acquired each image and the posture of the paired camera is also added. A plurality of pairs of two cameras may be selected simultaneously, and the distance measurement process may be performed at a later stage for each combination) [Nakata: para. 0108]; (In FIGS. 10A and 10B, for each point in the left image Il3, the corresponding points are illustrated in the right image Ir3a and the right image Ir3b, and in the right image Ir3a and the right image Ir3b, the ranging error at each point is indicated by an "o" or "x" around the point. Here, the magnitudes of "o" or "x" indicate the degree of error, with "o" indicating that the error appears as a distance closer than it actually is and "x" indicating that the error appears as a distance farther than it actually is) [Nakata: para. 0081]);
wherein the first camera is a vehicle-mounted camera (VC) of a vehicle (images captured by a pair of vehicle-mounted cameras) [Nakata: para. 0001] and the second camera (a second camera) [Nakata: para. 0010] is an infrastructure camera (IC) that is separate from the vehicle, the first and second cameras configured to provide different perspectives of a same environment (the other monocular camera capturing the same object from another position in a different direction (that is, the vertical position of the image capturing line on the side of the other monocular camera) may differ) [Nakata: para. 0008].
Nakata does not explicitly disclose the following claim limitations (Emphasis added).
for pairs of images, iteratively performing.
the second camera is an infrastructure camera (IC).
However, in the same field of endeavor Raskob further discloses the claim limitations as follows:
iteratively performing (The algorithm is a greedy iterative approach that performs an efficient global search) [Raskob: col. 9, line 33-35];
selecting the pair associated with the lowest error (select two images with a lowest Root Mean Square Error (RMSE), or similarly, a high matching quality. An RSME score may be determined for many possible image pairs, characteristics within the images) [Raskob: col. 6, line 61-65; Step 208 in Fig. 2]; and identifying the images associated with that pair (select two images with a lowest Root Mean Square Error (RMSE)) [Raskob: col. 6, line 61-62; Step 208 in Fig. 2] as being synchronized ((The point cloud images may be fused to generate the DSM) [Raskob: col. 3, line 19-21; Step 1206 in Fig. 12]; (the stereo pair point clouds are adjusted and fused to generate the Digital Surface Model (DSM) as described above. The stereo pair point clouds may be aligned to further reduce the error between the point clouds) [Raskob: col. 18, line 56-29; Step 1206 in Fig. 12]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Nakata with those of Raskob to program the system to implement Raskob’s method.
Therefore, the combination of Nakata with Raskob will enable the system to generate complete, accurate, and geometrically optimized three-dimensional environment models [Raskob: col. 1, line 8-11; Abstract].
Nakata and Raskob do not explicitly disclose the following claim limitations (Emphasis added).
the second camera is an infrastructure camera (IC).
However, in the same field of endeavor Shimizu further discloses the claim limitations as follows:
the second camera is an infrastructure camera (IC) (A camera 521 is installed in the vicinity of the ceiling of the end, which is opposite to the door) [Shimizu: col. 17, line 26-27].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Nakata and Raskob with those of Shimizu to program the system to implement Shimizu’s method.
Therefore, the combination of Nakata and Raskob with Shimizu will enable the system to detect a moving object around the vehicle [Shimizu: col. 1, line 53-67].
Regarding claim 2, Nakata meets the claim limitations as set forth in claim 1.
Nakata further meets the claim limitations as follows.
utilizing synchronized images (the image capturing by the two monocular cameras are synchronized) [Nakata: para. 0008] in a computer vision application (When using the stereo cameras, since the relationship between positions and postures (viewing direction) of the left and right cameras are known, the distance to the ranging target around the vehicle (three-dimensional information) can be calculated using the method of triangulation by obtaining the parallax by mapping the same target portions in the left and right images captured by the left and right cameras using, such as a pattern matching method) [Nakata: para. 0005] comprising a vehicle navigation process (Vehicles are equipped with monocular cameras mounted at various positions and in various orientations on the vehicle to monitor each direction around the vehicle. Therefore, by combining captured images from any two monocular cameras with overlapping image capturing areas, it is possible to acquire three-dimensional information around the vehicle using the method of triangulation described above, even in directions not monitored by the stereo camera. To do this, it is sufficient that the image capturing timings of the two monocular cameras are synchronized such that the overlapping image capturing areas are captured simultaneously) [Nakata: para. 0007].
Regarding claims 4 and 17, Nakata meets the claim limitations as set forth in claims 1 and 14. Nakata further meets the claim limitations as follows.
at a vehicle information system (a vehicle-mounted control device and a three-dimensional information acquisition method that acquire three-dimensional information around a vehicle based on a pair of still images captured by a pair of vehicle-mounted cameras.) [Nakata: para. 0001] communicatively coupled to a vehicle performing steps (a computer including hardware such as a CPU or another arithmetic unit, a storage device such as semiconductor memory, and a communication device. Then, the arithmetic unit executes a predetermined program to realize each function of the camera signal processing unit 12, the camera recognition processing unit 13, and the like) [Nakata: para. 0086] comprising at least one of: receiving at least one image of the set of images (images obtained from a single camera) [Nakata: para. 0090]; obtaining, from a server, global navigation satellite system (GNSS) information; using the GNSS information to determine a location of the vehicle; communicating the location to the server; receiving, from the server, a three-dimensional (3D) map that comprises an area surrounding the vehicle; or using the GNSS information to perform a rectification operation to compensate a perspective distortion in the at least one image.
Regarding claim 5, Nakata meets the claim limitations as set forth in claim 3. Nakata further meets the claim limitations as follows.
the 3D map comprises location information ((When using the stereo cameras, since the relationship between positions and postures (viewing direction) of the left and right cameras are known, the distance to the ranging target around the vehicle (three-dimensional information) can be calculated using the method of triangulation by obtaining the parallax by mapping the same target portions in the left and right images captured by the left and right cameras using, such as a pattern matching method) [Nakata: para. 0005]; (It is also possible to use for determination, the traveling speed, the steering information, and the position information of the subject vehicle on the map obtained via the CAN interface 17) [Nakata: para. 0097]) associated with one or more ICs.
Nakata and Raskob do not explicitly disclose the following claim limitations (Emphasis added).
the 3D map comprises location information associated with one or more ICs.
However, in the same field of endeavor Shimizu further discloses the claim limitations as follows:
the 3D map comprises location information associated with one or more ICs (A camera 521 is installed in the vicinity of the ceiling of the end, which is opposite to the door) [Shimizu: col. 17, line 26-27; Figs. 9, 16-17].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Nakata and Raskob with those of Shimizu to program the system to implement Shimizu’s method.
Therefore, the combination of Nakata and Raskob with Shimizu will enable the system to detect a moving object around the vehicle [Shimizu: col. 1, line 53-67].
Regarding claim 6, Nakata meets the claim limitations as set forth in claim 5. Nakata further meets the claim limitations as follows.
comparing comprises using an infrastructure system to match a first point that has been extracted from the first camera to a second point that has been extracted from the second camera to obtain a pair of matched points ((comparing the detection positions of the object by a plurality of camera images and another external sensor) [Nakata: para. 0061; Figs. 2-6]; (when at least one of the two monocular cameras is a camera that employs a rolling-shutter type image sensor such as a CMOS sensor (an image sensor that sequentially captures images by shifting exposure timing for each image capturing line on the light receiving surface), even when the start timings of the image capturing by the two monocular cameras are synchronized, the exposure timing of one monocular camera capturing an object from a certain position in a certain direction (that is, the vertical position of the image capturing line on the side of one monocular camera) and the exposure timing of the other monocular camera capturing the same object from another position in a different direction (that is, the vertical position of the image capturing line on the side of the other monocular camera) may differ. In this case, there is a problem that an error occurs in the distance measurement of the object due to the change in the relative positions of the vehicle (camera) and the ranging target that occurs during the period corresponding to the difference in the exposure timing (difference in the vertical positions of the image capturing lines of both the cameras).) [Nakata: para. 0008], (When using the stereo cameras, since the relationship between positions and postures (viewing direction) of the left and right cameras are known, the distance to the ranging target around the vehicle (three-dimensional information) can be calculated using the method of triangulation by obtaining the parallax by mapping the same target portions in the left and right images captured by the left and right cameras using, such as a pattern matching method) [Nakata: para. 0005] – Note: Paragraph [0011] of the original specification explains the flatness score as follows: “Further, a disparity between the pair of matched points can be used to calculate a distance that is associated with a 3D position for the pair of matched points and is treated as the flatness score”. As a result, Nakata discloses this limitation).
Nakata does not explicitly disclose the following claim limitations (Emphasis added).
comparing comprises using an infrastructure system to match a first point that has been extracted from the first camera to a second point that has been extracted from the second camera to obtain a pair of matched points.
However, in the same field of endeavor Raskob further discloses the claim limitations as follows:
comparing comprises using an infrastructure system to match a first point that has been extracted from the first camera to a second point that has been extracted from the second camera to obtain a pair of matched points (At step 210, stereo correspondence is determined. Each image in the stereo pair is analyzed to determine points of interest. The points of interest are then compared to match the images. For example, building detection may be performed in the images and similar buildings in different locations in each image may be used to determine the correspondence between the stereo pair. In some embodiments, the differences are measured to determine the disparity between the images. Certain parameters of the images are evaluated to determine the disparity such as, in the example above, known buildings within the images. Images with similar characteristics, as measured from the image or from metadata included with the image, may be stored together in the image bins. The time difference may be the difference in days of acquisition of the images. The information obtained from the images may further be used in embodiments described below.) [Raskob: col. 7, line 4-20; Figs. 3, 9].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Nakata with those of Raskob to program the system to implement Raskob’s method.
Therefore, the combination of Nakata with Raskob will enable the system to generate complete, accurate, and geometrically optimized three-dimensional environment models [Raskob: col. 1, line 8-11; Abstract].
Regarding claim 8, Nakata meets the claim limitations as set forth in claim 6. Nakata further meets the claim limitations as follows.
using a disparity between the pair of matched points to calculate a distance associated with a 3D position for the pair of matched points ((comparing the detection positions of the object by a plurality of camera images and another external sensor) [Nakata: para. 0061; Figs. 2-6]; (when at least one of the two monocular cameras is a camera that employs a rolling-shutter type image sensor such as a CMOS sensor (an image sensor that sequentially captures images by shifting exposure timing for each image capturing line on the light receiving surface), even when the start timings of the image capturing by the two monocular cameras are synchronized, the exposure timing of one monocular camera capturing an object from a certain position in a certain direction (that is, the vertical position of the image capturing line on the side of one monocular camera) and the exposure timing of the other monocular camera capturing the same object from another position in a different direction (that is, the vertical position of the image capturing line on the side of the other monocular camera) may differ. In this case, there is a problem that an error occurs in the distance measurement of the object due to the change in the relative positions of the vehicle (camera) and the ranging target that occurs during the period corresponding to the difference in the exposure timing (difference in the vertical positions of the image capturing lines of both the cameras).) [Nakata: para. 0008], and treating the distance as the flatness score ((When using the stereo cameras, since the relationship between positions and postures (viewing direction) of the left and right cameras are known, the distance to the ranging target around the vehicle (three-dimensional information) can be calculated using the method of triangulation by obtaining the parallax by mapping the same target portions in the left and right images captured by the left and right cameras using, such as a pattern matching method) [Nakata: para. 0005] – Note: Paragraph [0011] of the original specification explains the flatness score as follows: “Further, a disparity between the pair of matched points can be used to calculate a distance that is associated with a 3D position for the pair of matched points and is treated as the flatness score”. As a result, Nakata discloses this limitation).
Nakata does not explicitly disclose the following claim limitations (Emphasis added).
using a disparity between the pair of matched points to calculate a distance associated with a 3D position for the pair of matched points.
However, in the same field of endeavor Raskob further discloses the claim limitations as follows:
using a disparity between the pair of matched points to calculate a distance associated with a 3D position for the pair of matched points (At step 210, stereo correspondence is determined. Each image in the stereo pair is analyzed to determine points of interest. The points of interest are then compared to match the images. For example, building detection may be performed in the images and similar buildings in different locations in each image may be used to determine the correspondence between the stereo pair. In some embodiments, the differences are measured to determine the disparity between the images. Certain parameters of the images are evaluated to determine the disparity such as, in the example above, known buildings within the images. Images with similar characteristics, as measured from the image or from metadata included with the image, may be stored together in the image bins. The time difference may be the difference in days of acquisition of the images. The information obtained from the images may further be used in embodiments described below.) [Raskob: col. 7, line 4-20; Figs. 3, 9].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Nakata with Raskob to program the system to implement Raskob’s method.
Therefore, the combination of Nakata with Raskob will enable the system to generate complete and accurate geometrically optimized three-dimensional environment models [Raskob: col. 1, line 8-11, Abstract].
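Examiner’s note (illustrative only): the disparity-to-distance relationship recited in the limitation and described in Nakata’s paragraph [0005] follows the standard stereo triangulation model for a rectified pair, Z = f·B/d. The sketch below is the editor’s own minimal illustration; the focal length, baseline, and pixel coordinates are hypothetical values and are not drawn from Nakata, Raskob, or the application.

```python
# Minimal sketch of standard stereo triangulation: for a rectified stereo
# pair, the distance Z to a matched point is Z = f * B / d, where f is the
# focal length in pixels, B is the camera baseline in meters, and d is the
# disparity (column difference) between the pair of matched points.

def distance_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance (meters) associated with the 3D position of a pair of matched points."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_px * baseline_m / disparity_px

# Hypothetical values: 720 px focal length, 0.25 m baseline.
x_left, x_right = 350.0, 320.0           # matched point columns in each image
disparity = x_left - x_right             # 30 px
print(distance_from_disparity(720.0, 0.25, disparity))  # prints 6.0
```

Per paragraph [0011] of the original specification as quoted above, this distance can then be treated directly as the flatness score.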
Regarding claim 11, Nakata meets the claim limitations as set forth in claim 1. Nakata further meets the claim limitations as follows.
using a spatial database to detect a landmark position in the image of the second camera ((a vehicle-mounted control device and a three-dimensional information acquisition method that acquire three-dimensional information around a vehicle based on a pair of still images captured by a pair of vehicle-mounted cameras.) [Nakata: para. 0001]; (a second camera) [Nakata: para. 0010])
Nakata does not explicitly disclose the following claim limitations (Emphasis added).
using a spatial database to detect a landmark position in the image of the second camera.
However, in the same field of endeavor Raskob further discloses the claim limitations as follows:
using a spatial database to detect a landmark position in the image (A third embodiment is directed to a method of creating an accurate and complete environmental model from a plurality of images, the method comprising the steps of, obtaining the plurality of images, recognizing at least one building in the plurality of images, determining a first height associated with the at least one building from a digital surface model, determining a first shape of the at least one building at the first height, determining a second height associated with the at least one building, determining a second shape of the at least one building at the second height, and combining the first shape and the second shape to generate a geometric model of the building.) [Raskob: col. 2, line 13-24; Figs. 3, 9].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Nakata with Raskob to program the system to implement Raskob’s method.
Therefore, the combination of Nakata with Raskob will enable the system to generate complete and accurate geometrically optimized three-dimensional environment models [Raskob: col. 1, line 8-11, Abstract].
Regarding claim 12, Nakata meets the claim limitations as set forth in claim 11. Nakata and Raskob further meet the claim limitations as follows.
using the landmark position ((When using the stereo cameras, since the relationship between positions and postures (viewing direction) of the left and right cameras are known, the distance to the ranging target around the vehicle (three-dimensional information) can be calculated using the method of triangulation by obtaining the parallax by mapping the same target portions in the left and right images captured by the left and right cameras using, such as a pattern matching method) [Nakata: para. 0005]; (It is also possible to use for determination, the traveling speed, the steering information, and the position information of the subject vehicle on the map obtained via the CAN interface 17) [Nakata: para. 0097]; (A third embodiment is directed to a method of creating an accurate and complete environmental model from a plurality of images, the method comprising the steps of, obtaining the plurality of images, recognizing at least one building in the plurality of images, determining a first height associated with the at least one building from a digital surface model, determining a first shape of the at least one building at the first height, determining a second height associated with the at least one building, determining a second shape of the at least one building at the second height, and combining the first shape and the second shape to generate a geometric model of the building.) [Raskob: col. 2, line 13-24; Figs. 3, 9]) to determine an initial position of the IC.
Nakata and Raskob do not explicitly disclose the following claim limitations (Emphasis added).
using the landmark position to determine an initial position of the IC.
However, in the same field of endeavor Shimizu further discloses the claim limitations as follows:
using the landmark position to determine an initial position of the IC (A camera 521 is installed in the vicinity of the ceiling of the end, which is opposite to the door) [Shimizu: col. 17, line 26-27; Figs. 9, 16-17].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Nakata and Raskob with Shimizu to program the system to implement Shimizu’s method.
Therefore, the combination of Nakata and Raskob with Shimizu will enable the system to detect a moving object around the vehicle [Shimizu: col. 1, line 53-67].
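Examiner’s note (illustrative only): one common way a detected landmark with a known map position can seed an initial position estimate for the image capture device (IC) is to offset the landmark’s stored coordinates by the measured range along the observed bearing. The function below is a hypothetical sketch by the editor and does not reproduce any method of Nakata, Raskob, or Shimizu; all names and values are illustrative.

```python
import math

# Hypothetical sketch: estimate an initial 2D position for the IC from one
# landmark whose map (spatial-database) position is known. The camera lies
# at the landmark position displaced backward by the measured range along
# the world-frame bearing from camera to landmark.

def initial_position(landmark_xy, bearing_rad, range_m):
    """Return (x, y) of the camera given a single landmark observation.

    landmark_xy: known (x, y) of the landmark from the spatial database
    bearing_rad: world-frame bearing from camera to landmark (radians)
    range_m:     measured distance from camera to landmark (meters)
    """
    lx, ly = landmark_xy
    return (lx - range_m * math.cos(bearing_rad),
            ly - range_m * math.sin(bearing_rad))

# Landmark at (10, 5), observed due east (bearing 0) at 4 m: camera at (6, 5).
print(initial_position((10.0, 5.0), 0.0, 4.0))  # prints (6.0, 5.0)
```

A single landmark fixes position only given a known bearing and range; in practice several landmarks would be combined, but one observation suffices to illustrate the claimed initial-position step.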
Regarding claim 13, Nakata meets the claim limitations as set forth in claim 3.
Nakata does not explicitly disclose the following claim limitations (Emphasis added).
determining a disparity between pairs of matched areas of the environment that are expected to be flat to assess a flatness of an area.
However, in the same field of endeavor Raskob further discloses the claim limitations as follows:
determining a disparity between pairs of matched areas of the environment that are expected to be flat to assess a flatness of an area (At step 210, stereo correspondence is determined. Each image in the stereo pair is analyzed to determine points of interest. The points of interest are then compared to match the images. For example, building detection may be performed in the images and similar buildings in different locations in each image may be used to determine the correspondence between the stereo pair. In some embodiments, the differences are measured to determine the disparity between the images. Certain parameters of the images are evaluated to determine the disparity such as, in the example above, known buildings within the images. Images with similar characteristics, as measured from the image or from metadata included with the image, may be stored together in the image bins. The time difference may be the difference in days of acquisition of the images. The information obtained from the images may further be used in embodiments described below.) [Raskob: col. 7, line 4-20; Figs. 3, 9].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Nakata with Raskob to program the system to implement Raskob’s method.
Therefore, the combination of Nakata with Raskob will enable the system to generate complete and accurate geometrically optimized three-dimensional environment models [Raskob: col. 1, line 8-11, Abstract].
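Examiner’s note (illustrative only): assessing the flatness of an area from the disparities of matched areas can be sketched as triangulating a distance for each matched area and scoring flatness by the spread of those distances; a small spread over an area expected to be flat indicates consistent depth. This sketch is the editor’s own, uses hypothetical values, and does not reproduce the method of any cited reference.

```python
# Sketch: treat each matched area's disparity as a depth sample and use the
# spread (population standard deviation) of the triangulated distances as a
# flatness score -- lower means flatter. Focal length and baseline are
# hypothetical (720 px, 0.25 m).
import statistics

def flatness_score(disparities_px, focal_px=720.0, baseline_m=0.25):
    """Std. dev. of triangulated distances (meters) over matched areas."""
    distances = [focal_px * baseline_m / d for d in disparities_px]
    return statistics.pstdev(distances)

flat_area   = [14.0, 14.0, 14.1, 13.9]   # nearly uniform disparity
tilted_area = [10.0, 12.0, 14.0, 16.0]   # depth varies across the area
print(flatness_score(flat_area) < flatness_score(tilted_area))  # prints True
```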
Regarding claim 14, Nakata meets the claim limitations as follows:
An image synchronization system ((a vehicle-mounted control device and a three-dimensional information acquisition method that acquire three-dimensional information around a vehicle based on a pair of still images captured by a pair of vehicle-mounted cameras.) [Nakata: para. 0001]; (the image capturing timings of the two monocular cameras are synchronized) [Nakata: para. 0007]) comprising: a first camera configured to capture a set of images (One type of external sensor used to assess the surrounding situation of the vehicle is a camera, which can acquire images of the surrounding situation) [Nakata: para. 0002]; a second camera configured to capture an image (One type of external sensor used to assess the surrounding situation of the vehicle is a camera, which can acquire images of the surrounding situation) [Nakata: para. 0002]; (When using the stereo cameras, since the relationship between positions and postures (viewing direction) of the left and right cameras are known, the distance to the ranging target around the vehicle (three-dimensional information) can be calculated using the method of triangulation by obtaining the parallax by mapping the same target portions in the left and right images captured by the left and right cameras using, such as a pattern matching method) [Nakata: para. 0005]); one or more processors (a computer including hardware such as a CPU or another arithmetic unit, a storage device such as semiconductor memory, and a communication device. Then, the arithmetic unit executes a predetermined program to realize each function of the camera signal processing unit 12, the camera recognition processing unit 13, and the like) [Nakata: para. 0086] configured to iteratively perform steps ((a predetermined program) [Nakata: para. 0086]; (performs electrical matching) [Nakata: para. 0087]; (FIGS. 
11 and 12 are used to illustrate a specific configuration example of a vehicle-mounted control device 1 that executes a three-dimensional information acquisition process described so far. In the following, it is assumed that n cameras (cameras C1 to Cn) are mounted on the vehicle 100 to monitor each direction around the vehicle such that image capturing areas of respective cameras overlap. Each camera is connected to the vehicle-mounted control device 1) [Nakata: para. 0087; Figs. 11-12]), for pairs of images (pairs of images) [Nakata: para. 0115], the steps ((a predetermined program) [Nakata: para. 0086]; (performs electrical matching) [Nakata: para. 0087]; (FIGS. 11 and 12 are used to illustrate a specific configuration example of a vehicle-mounted control device 1 that executes a three-dimensional information acquisition process described so far) [Nakata: para. 0087; Figs. 11-12]) comprising: comparing each of a set of images obtained from a first camera with an image of a second camera (comparing the detection positions of the object by a plurality
of camera images and another external sensor) [Nakata: para. 0061; Figs. 2-6]; and calculating a flatness score ((When using the stereo cameras, since the relationship between positions and postures (viewing direction) of the left and right cameras are known, the distance to the ranging target around the vehicle (three-dimensional information) can be calculated using the method of triangulation by obtaining the parallax by mapping the same target portions in the left and right images captured by the left and right cameras using, such as a pattern matching method) [Nakata: para. 0005] – Note: Paragraph [0011] of the original specification explains the flatness score as follows: “Further, a disparity between the pair of matched points can be used to calculate a distance that is associated with a 3D position for the pair of matched points and is treated as the flatness score”. As a result, Nakata discloses this limitation) that is indicative of an error aggregated from synchronization of the pairs of images with known landmarks (when at least one of the two monocular cameras is a camera that employs a rolling-shutter type image sensor such as a CMOS sensor (an image sensor that sequentially captures images by shifting exposure timing for each image capturing line on the light receiving surface), even when the start timings of the image capturing by the two monocular cameras are synchronized, the exposure timing of one monocular camera capturing an object from a certain position in a certain direction (that is, the vertical position of the image capturing line on the side of one monocular camera) and the exposure timing of the other monocular camera capturing the same object from another position in a different direction (that is, the vertical position of the image capturing line on the side of the other monocular camera) may differ. 
In this case, there is a problem that an error occurs in the distance measurement of the object due to the change in the relative positions of the vehicle (camera) and the ranging target that occurs during the period corresponding to the difference in the exposure timing (difference in the vertical positions of the image capturing lines of both the cameras)) [Nakata: para. 0008] among the pairs of images (pairs of images) [Nakata: para. 0115], selecting a pair of images associated with a lowest error ((Compared with the imaging method in FIG. 6, which simultaneously captures the left and right image capturing lines projecting one specific point (the point P) of the ranging target OB, in the imaging method of the modification illustrated in FIGS. 7A to 7C, the deviation of the image capturing timing between the left and right image capturing lines including the point P may be larger. Thus, the ranging error is also somewhat larger. Compared with the case where the exposure timing relationship between the left and right cameras is always fixed (for example, always using the combination AB in FIG. 7B), it is possible to suppress the ranging error because the image capturing timings are controlled according to a ranging target region. In order to suppress the ranging error even when the region is divided into smaller regions, it is sufficient that the region is divided into more regions and the difference in the exposure timing between the left camera and the right camera for each region is defined. Furthermore, it is conceivable to define the region by overlapping the region, select the region that best covers a target region desired to be distance measured, and define the difference) [Nakata: para. 0069-0070 – Note: Nakata discloses a method for selecting a pair of images with the least error]; (In FIGS. 
10A and 10B, for each point in the left image Il3, the corresponding points are illustrated in the right image Ir3a and the right image Ir3b, and in the right image Ir3a and the right image Ir3b, the ranging error at each point is indicated by an "o" or "x" around the point. Here, the magnitudes of "o" or "x" indicate the degree of error, with "o" indicating that the error appears as a distance closer than it actually is and "x" indicating that the error appears as a distance farther than it actually is) [Nakata: para. 0081] – Note: Nakata further discloses marks "o" and "x", which indicate the ranges of error. It is clear from these features that a pair of images with the least error can be selected from the points marked “o”, for which the error distance is smaller); and identifying the images of the selected pair of images as being synchronized ((Recent vehicles are equipped with monocular cameras mounted at various positions and in various orientations on the vehicle to monitor each direction around the vehicle. Therefore, by combining captured images from any two monocular cameras with overlapping image capturing areas, it is possible to acquire three-dimensional information around the vehicle using the method of triangulation described above, even in directions not monitored by the stereo camera. To do this, it is sufficient that the image capturing timings of the two monocular cameras are synchronized such that the overlapping image capturing areas are captured simultaneously) [Nakata: para. 0007] ; (When selecting the images, the stereo view target image selection unit 12g uses the information related to selection of the ranging target OB obtained from the camera signal processing operation control unit 12a. Taking the later processing into account, the images used for the distance measurement are paired, and information on the posture of the camera that acquired each image and the posture of the paired camera is also added. 
A plurality of pairs of two cameras may be selected simultaneously, and the distance measurement process may be performed at a later stage for each combination) [Nakata: para. 0108] ; (In FIGS. 10A and 10B, for each point in the left image Il3, t