DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Regarding the previous 35 USC 112(f) claim interpretation, Applicant's remarks have been fully considered, and there do not appear to be separate arguments directed to the previous 35 USC 112(f) claim interpretation. Accordingly, the 35 USC 112(f) claim interpretation is maintained, as updated in light of the present claim amendments.
Regarding the previous 35 USC 112(a) rejections, those rejections are withdrawn in light of the present claim amendments.
Regarding the previous 35 USC 112(b) rejections, those rejections are withdrawn in light of the present claim amendments.
Regarding the previous 35 USC 103 rejection, Applicant argues that the prior art of record does not disclose a computing system of a first location system configured to determine a vehicle position and, after the global position is provided by a second location system, update the global position of the vehicle based on the vehicle position. Applicant's arguments have been fully considered but are not persuasive. Shashua discloses a first location system configured to determine a vehicle position and, after a global position is provided, determine the global position of the vehicle based on the vehicle position (see at least [0481]: Once the relative position between the vehicle and the landmarks is found, the landmarks' world coordinates are taken from the HD map, and the vehicle can use them to compute its own location and pose, [0912]: Upon verifying the recognized super landmark, position determinations for the vehicle along a target trajectory may commence based on any of the landmarks included in a super landmark group, [0920]: Once a recognized landmark is identified based on an identified characteristic of the super landmark group, predetermined characteristics of the recognized landmark may be used to assist a host vehicle in navigation…recognized landmark may be used to determine a current position of the host vehicle, [0983]: location information representing a position of vehicle 7902 determined by…based on a position of vehicle 7902 relative to a recognized landmark), but does not explicitly recite updating the global position.
However, Goncalves teaches updating the global position of the vehicle based on the vehicle position: Goncalves teaches raw pose data from dead reckoning sensors, and that the SLAM module (604) outputs one or more poses (position and orientation) of the vehicle. Goncalves further teaches that the SLAM module uses the change in pose information to update the one or more poses and maps 620 maintained (see at least abstract, [0103], [0108], Fig. 6: system architecture for VSLAM system; Examiner's note: the previous Office action contained a typographical error citing US 2018018736 (“Jian”) for the abstract, but the cited Goncalves reference was the intended reference).
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date to provide the invention as disclosed by Shashua by incorporating the teachings of Goncalves, with a reasonable expectation of success, in order to improve the determination of a position or orientation of a vehicle by advantageously compensating for drift in dead reckoning measurements and enhancing the robustness and accuracy of the one or more poses estimated by the SLAM module (604) (see at least abstract, [0103], [0108]). The combination would yield predictable results.
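Examiner's note (illustrative only, not part of the grounds of rejection): the pose-update behavior attributed to Goncalves above, in which a visually observed landmark compensates for accumulated dead reckoning drift, can be sketched as follows; all names and values are hypothetical and do not purport to reproduce the cited reference's implementation.

import math

def propagate_pose(x, y, heading, speed, yaw_rate, dt):
    # Dead-reckoning propagation from speed and yaw-rate measurements.
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    heading += yaw_rate * dt
    return x, y, heading

def correct_pose(x, y, landmark_xy, measured_offset_xy, gain=0.5):
    # Blend the dead-reckoned position toward the position implied by a
    # visually observed landmark with known world coordinates, compensating
    # for drift in the dead reckoning measurements.
    lm_x, lm_y = landmark_xy
    off_x, off_y = measured_offset_xy
    obs_x, obs_y = lm_x + off_x, lm_y + off_y  # position implied by landmark
    return x + gain * (obs_x - x), y + gain * (obs_y - y)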
Further, the Conclusion section continues to cite the previously cited prior art of Chen, which also discloses a position update unit (see at least [0032]).
Accordingly, the 35 USC 103 rejection is maintained.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: a second location system configured to provide…; landmark identification module configured to identify…; computing system is configured to determine…; a communication module configured to: transmit… in claims 1-4, 6-15, 17-21, 23, 25, 27.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. Applicant's specification recites at least [0039]: secondary location systems (e.g., GPS), [0041]: location estimates can be provided by on-board global navigation systems (e.g., low-resolution GPS systems), dead-reckoning systems, or any other suitable secondary location system, [0055]: a remote computing system 220 (e.g., server system); [0058]: a processing system 215 (e.g., CPU, GPU, TPU, DSP etc.), storage (e.g., Flash, RAM, etc.), a communication subsystem (e.g., a radio, antenna, wireless data link, etc.)…Examples of the landmark detection system include: optical sensor(s) (e.g., monocular camera, stereo camera, multispectral camera, hyperspectral camera, visible range camera, UV camera, IR camera); antenna (e.g., BLE, WiFi, 3G, 4G, 5G, Zigbee, 802.11x, etc.), acoustic sensors (e.g., microphones, speakers), rangefinding systems (e.g., LIDAR, RADAR, sonar, TOF), or any other suitable sensor.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claim(s) 1-4, 6-15, 17-21, 23, 25, 27 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 1 recites the limitations “An apparatus comprising a first location system, the first location system comprising…wherein the vehicle comprises a second location system configured to provide a global position of the vehicle…after the global position is provided by the second location system, update the global position of the vehicle based on the vehicle position,” wherein Applicant's remarks recite that the vehicle position is determined by the first location system. The closest reference in the specification includes [0039]: method 100 for precision localization includes: detecting a landmark proximal the vehicle; determining the vehicle position relative to the detected landmark; and determining a global system location based on the vehicle position relative to the detected landmark…secondary location systems (e.g., GPS). The specification does not appear to recite a first location system or providing the global system location. Further, the specification does not explicitly disclose that the second location system provides the global position to the computing system comprised by the first location system. Applicant's remarks do not appear to provide citations to the instant disclosure for support. Dependent claims are rejected as being dependent upon and failing to cure the deficiencies of the independent claim.
Claim 27 recites the limitation “wherein the landmark identification module is configured to obtain the global position of the vehicle, and to identify the landmarks based on the sensor signals from the sensor and based on the global position of the vehicle”. The closest reference in the specification recites [0003]: computing system comprising a landmark identification module configured to identify a landmark depicted in the image, wherein the landmark is associated with a landmark geographic location and a known dimension; wherein the computing system is configured to: determine a relative position between the vehicle and the landmark, update the global system location based on the relative position. The specification does not appear to describe the limitation.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 27 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 27 recites the limitation “wherein the landmark identification module is configured to obtain the global position of the vehicle, and to identify the landmarks based on the sensor signals from the sensor and based on the global position of the vehicle.” However, it is unclear how the landmark identification module obtains the global position of the vehicle and identifies the landmarks based on the global position of the vehicle, because the specification instead recites determining a relative position between the vehicle and the landmark and updating the global system location based on the relative position (see at least instant specification [0003], [0008]-[0009], [0018], [0039]: determining a global system location based on the vehicle position relative to the detected landmark, [0089]). Accordingly, the metes and bounds required by the limitation in view of the specification are unclear.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-4, 6, 8-13, 17-18, 25, 27 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20190384294 (“Shashua”) in view of US 20040167716 (“Goncalves”) and US 20190101398 (“Mielenz”).
As per claim 1, Shashua discloses an apparatus comprising a first location system, the first location system comprising:
a sensor configured to be mounted to a vehicle, wherein the vehicle provides a global position of the vehicle (see at least [0444]: landmark may be visible within a field of view of a camera (e.g., camera 122) installed on each of vehicles 1205-1225. In some embodiments, camera 122 may capture an image of a landmark, [0481]: Once the relative position between the vehicle and the landmarks is found, the landmarks' world coordinates are taken from the HD map, and the vehicle can use them to compute its own location and pose, [0381]-[0382]: dead-reckoning); and
a computing system associated with the vehicle, the computing system comprising a landmark identification module configured to identify landmarks based on sensor signals from the sensor (see at least [0444]: A processor (e.g., processor 180, 190, or processing unit 110) provided on vehicle 1205 may process the image of the landmark to extract identification information for the landmark. The landmark identification information, rather than an actual image of the landmark, may be stored in sparse map 800);
wherein the computing system is configured to:
determine a vehicle position based on (1) a pattern of the identified landmarks and (2) a landmark fingerprint comprising pre-determined pattern of previously identified landmarks with respective known landmark locations (see at least [0575]: match landmarks types in both sequences, [0904]: group of two or more landmarks may be designated as a super landmark, [0911]: a sequence, which may be stored in sparse data map 800, of a speed limit sign at a distance D1, followed by a stop sign at a distance D2, and two traffic lights at a distance D3 from a host vehicle (where D3>D2>D1) may constitute a unique, recognizable characteristic of the super landmark that may aid in verifying speed limit sign 7790, for example, as a recognized landmark from sparse data map 800, [0919]: identification of the at least one landmark may be based, at least in part, upon a super landmark signature associated with the group of landmarks. A super landmark signature may be a signature for uniquely identifying a group of landmarks. In one embodiment, a super landmark signature may be based on one or more of the landmark group characteristics discussed above (e.g., number of landmarks, relative distance between landmarks, and ordering sequence of landmarks), [0912]: once the vehicle is located at a viewing location for which visual information for the super landmark is included in sparse data map 800, the processing unit of the vehicle can analyze images captured by one or more cameras onboard the vehicle to look for expected shapes, patterns, angles, segment lengths, etc. to determine whether a group of objects forms an expected super landmark. Upon verifying the recognized super landmark, position determinations for the vehicle along a target trajectory may commence based on any of the landmarks included in a super landmark group, [0920]: Once a recognized landmark is identified based on an identified characteristic of the super landmark group, predetermined characteristics of the recognized landmark may be used to assist a host vehicle in navigation…recognized landmark may be used to determine a current position of the host vehicle, [0983]: location information representing a position of vehicle 7902 determined by…based on a position of vehicle 7902 relative to a recognized landmark), and
after the global position is provided, determine the global position of the vehicle based on the vehicle position (see at least [0051]-[0052]: Determining the heading for the vehicle may include determining a previous location of the vehicle relative to the road junction based on the intersection of the directional indicators for the two or more landmarks; and determining the heading based on the previous location and the current location, [0077], [0080], [0381]-[0382]: dead-reckoning; the identified landmarks included in sparse map 800 may serve as navigational anchors from which an accurate position of the vehicle relative to a target trajectory may be determined, [0481]: Once the relative position between the vehicle and the landmarks is found, the landmarks' world coordinates are taken from the HD map, and the vehicle can use them to compute its own location and pose, [0983]: the road environment information may include one or more images captured by an image capture device of vehicle 7902; location information representing a position of vehicle 7902 determined by, for example, using position sensor 130 and/or based on a position of vehicle 7902 relative to a recognized landmark),
wherein the landmark identification module is configured to identify at least one of the landmarks based at least in part on a detected corner in the image, a detected edge in the image, a detected shape in the image, a detected color in the image, a contrast, or any combination of the foregoing (see at least [0010]: plurality of predetermined landmarks may be represented in the sparse map by parameters including landmark size, distance to previous landmark, landmark type, and landmark position, [0513]: general signs (e.g., a rectangular business sign that is associated with a unique signature, such as a color pattern). The identified landmark may be compared with the landmark stored in sparse map 800. When a match is found, the location of the landmark stored in sparse map 800 may be used as the location of the identified landmark, [0906]: A super landmark may be associated with one or more characteristics, such as distances between constituent landmarks, a number of landmarks in the group, an ordering sequence, one or more relative spatial relationships between the members of the landmark group, etc. Moreover, these characteristics may be used to generate a super landmark signature, [0912]: analyze images captured by one or more cameras onboard the vehicle to look for expected shapes, patterns, angles, segment lengths, etc. to determine whether a group of objects forms an expected super landmark); and
wherein the computing system is configured to provide an output for assisting a control of the vehicle (see at least [0513]: location of the identified landmark may be used for determining the location of the vehicle 1205 along a target trajectory, [0946]: a vehicle (which may be an autonomous vehicle) may travel on a roadway based on the road model and may make use of observations made by the self-aware system in order to adjust a navigational maneuver of the vehicle based on a navigational adjustment condition).
Shashua discloses, after the global position is provided, determining the global system location based on the vehicle position, the global system location being a global position of the vehicle (see at least [0481]: Once the relative position between the vehicle and the landmarks is found, the landmarks' world coordinates are taken from the HD map, and the vehicle can use them to compute its own location and pose, [0482]: landmarks, together with an HD map, may enable to compute the precise vehicle pose in global coordinates, [0513]: general signs (e.g., a rectangular business sign that is associated with a unique signature, such as a color pattern). The identified landmark may be compared with the landmark stored in sparse map 800. When a match is found, the location of the landmark stored in sparse map 800 may be used as the location of the identified landmark).
Shashua does not explicitly recite, after the global position is provided by the second location system, updating the global position of the vehicle based on the vehicle position (but see at least [0912]: Upon verifying the recognized super landmark, position determinations for the vehicle along a target trajectory may commence based on any of the landmarks included in a super landmark group, [0920]: Once a recognized landmark is identified based on an identified characteristic of the super landmark group, predetermined characteristics of the recognized landmark may be used to assist a host vehicle in navigation…recognized landmark may be used to determine a current position of the host vehicle, [0983]: location information representing a position of vehicle 7902 determined by…based on a position of vehicle 7902 relative to a recognized landmark).
However, Goncalves teaches, after the global position is provided by the second location system, updating the global position of the vehicle based on the vehicle position (see at least abstract: use a visual sensor and dead reckoning sensors to process Simultaneous Localization and Mapping (SLAM). These techniques can be used in robot navigation. Advantageously, such visual techniques can be used to autonomously generate and update a map, [0103]: Pre-filtering of data to the SLAM module 604 can advantageously enhance the robustness and accuracy of one or more poses (position and orientation) and maps 620 estimated by the SLAM module 604, [0108]: SLAM module 604 uses the change in pose information to update the one or more poses and maps 620 maintained. Accordingly, the visually observed landmarks can advantageously compensate for drift in dead reckoning measurements).
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date to provide the invention as disclosed by Shashua by incorporating the teachings of Goncalves, with a reasonable expectation of success, in order to improve the determination of a position or orientation of a vehicle.
Shashua does not appear to explicitly disclose determining a vehicle position based on processing more than one landmark at a time.
However, Mielenz teaches determining a vehicle position based on (1) a pattern of the identified landmarks and (2) a landmark fingerprint comprising a pre-determined pattern of previously identified landmarks with respective known landmark locations (see at least [0001]: determining, with the aid of landmarks, a position and orientation of a vehicle moving in an environment in an at least partially automated manner; where the vehicle is moved in the environment, and through which a sequence of localization scenarios is generated, and landmark data being digitally processed by at least one vehicle control system, in order to control the position and orientation of the vehicle, [0020]: number and the distribution of the landmarks are a function of the localization accuracy to be attained and are selected accordingly by the method of the present invention. The quantity of possible landmarks and their position relative to the vehicle are known from the map, which is loaded from the data storage unit. The detected landmarks are recorded with the landmarks from the map with the aid of a matching algorithm. The localization accuracy is repeatedly checked by the above-mentioned algorithm, claim 14: at least one landmark allows an adequate determination of the attitude of the vehicle using a minimal number, and a quantity of the at least one landmark is selected from a quantity of available landmarks in order to determine the attitude).
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date to provide the invention as disclosed by Shashua by incorporating the teachings of Mielenz with a reasonable expectation of success in order to guide a vehicle in a safe manner.
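Examiner's note (illustrative only, not part of the grounds of rejection): the landmark-pattern matching discussed above, in which an observed sequence of landmark types and spacings is compared against a stored fingerprint (cf. Shashua's super landmark sequences and Mielenz's matching algorithm), can be sketched as follows; the function and data below are hypothetical.

def matches_fingerprint(observed, fingerprint, dist_tol=2.0):
    # observed / fingerprint: ordered lists of (landmark_type, distance_m).
    if len(observed) != len(fingerprint):
        return False
    return all(o_type == f_type and abs(o_dist - f_dist) <= dist_tol
               for (o_type, o_dist), (f_type, f_dist)
               in zip(observed, fingerprint))

# Example modeled on Shashua [0911]: a speed limit sign at D1, a stop sign at
# D2, and a traffic light at D3, where D3 > D2 > D1.
fingerprint = [("speed_limit", 10.0), ("stop", 25.0), ("traffic_light", 40.0)]
observed = [("speed_limit", 10.8), ("stop", 24.1), ("traffic_light", 40.5)]
assert matches_fingerprint(observed, fingerprint)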
As per claim 2, Shashua discloses wherein the computing system is configured to determine the vehicle position based on a landmark parameter (see at least abstract, [0399], [0457], [0618]: when processor unit 110 receives images captured by the onboard camera, those images may be analyzed by searching for an object at the expected location of a recognized landmark from sparse map 800; Further confirmation may be obtained, for example, by analyzing the image to determine what text or graphics appear on the sign in the captured images. Through textual or graphics recognition processes, the processor unit may determine that the rectangular shape in the captured image includes the text “Speed Limit 55.” By comparing the captured text to a type code associated with the recognized landmark stored in sparse data map 800 (e.g., a type indicating that the next landmark to be encountered is a speed limit sign), this information can further verify that the observed object in the captured images is, in fact, the expected recognized landmark, [0991]: some geographic regions may include road segments for which sparse data model 800 already includes refined target trajectories, landmark representations, landmark positions, etc. For example, in certain geographic regions (e.g., urban environments, heavily traveled roadways, etc.), sparse data model 800 may be generated based upon multiple traversals of various road segments by vehicles in a data collection mode; server may receive transmissions from only those vehicles in a geographic location that the server identifies and queries for updated road information. The server can use information received from any portion of the vehicles from a certain geographic region to verify and/or update any aspect of sparse data model 800).
As per claim(s) 3, Shashua discloses wherein the landmark parameter comprises a textual location indicator (see at least abstract, [0399], [0457], [0618]: when processor unit 110 receives images captured by the onboard camera, those images may be analyzed by searching for an object at the expected location of a recognized landmark from sparse map 800; Further confirmation may be obtained, for example, by analyzing the image to determine what text or graphics appear on the sign in the captured images. Through textual or graphics recognition processes, the processor unit may determine that the rectangular shape in the captured image includes the text “Speed Limit 55.” By comparing the captured text to a type code associated with the recognized landmark stored in sparse data map 800 (e.g., a type indicating that the next landmark to be encountered is a speed limit sign), this information can further verify that the observed object in the captured images is, in fact, the expected recognized landmark, [0991]: some geographic regions may include road segments for which sparse data model 800 already includes refined target trajectories, landmark representations, landmark positions, etc. For example, in certain geographic regions (e.g., urban environments, heavily traveled roadways, etc.), sparse data model 800 may be generated based upon multiple traversals of various road segments by vehicles in a data collection mode; server may receive transmissions from only those vehicles in a geographic location that the server identifies and queries for updated road information. The server can use information received from any portion of the vehicles from a certain geographic region to verify and/or update any aspect of sparse data model 800).
As per claim(s) 4, Shashua discloses wherein the computing system is configured to determine a unit region based on the textual location indicator, and retrieve a landmark geographic location from a storage based on the unit region (see at least abstract, [0399], [0457], [0618]: when processor unit 110 receives images captured by the onboard camera, those images may be analyzed by searching for an object at the expected location of a recognized landmark from sparse map 800; Further confirmation may be obtained, for example, by analyzing the image to determine what text or graphics appear on the sign in the captured images. Through textual or graphics recognition processes, the processor unit may determine that the rectangular shape in the captured image includes the text “Speed Limit 55.” By comparing the captured text to a type code associated with the recognized landmark stored in sparse data map 800 (e.g., a type indicating that the next landmark to be encountered is a speed limit sign), this information can further verify that the observed object in the captured images is, in fact, the expected recognized landmark, [0991]: some geographic regions may include road segments for which sparse data model 800 already includes refined target trajectories, landmark representations, landmark positions, etc. For example, in certain geographic regions (e.g., urban environments, heavily traveled roadways, etc.), sparse data model 800 may be generated based upon multiple traversals of various road segments by vehicles in a data collection mode; server may receive transmissions from only those vehicles in a geographic location that the server identifies and queries for updated road information. The server can use information received from any portion of the vehicles from a certain geographic region to verify and/or update any aspect of sparse data model 800).
As per claim(s) 6, Shashua discloses wherein the apparatus is configured to determine a landmark pose associated with each of the landmarks (see at least [0010]: plurality of predetermined landmarks may be represented in the sparse map by parameters including landmark size, distance to previous landmark, landmark type, and landmark position, [0047]-[0052]: analyze the at least one image to identify two or more landmarks located in the environment of the vehicle, [0371]: classify certain road features, [0381]-[0382]: vehicle may use landmarks occurring in sparse map 800 (and their known locations) to remove the dead reckoning-induced errors in position determination, [0444]: landmark may be visible within a field of view of a camera (e.g., camera 122) installed on each of vehicles 1205-1225. In some embodiments, camera 122 may capture an image of a landmark. A processor (e.g., processor 180, 190, or processing unit 110) provided on vehicle 1205 may process the image of the landmark to extract identification information for the landmark. The landmark identification information, rather than an actual image of the landmark, may be stored in sparse map 800, [0446]-[0449]: identification of the landmark may include a size of the landmark, [0671]: processing unit 110 of vehicle 200 may be configured to determine positions 4024, 4026 of landmarks 4016, 4018, respectively, relative to vehicle 200. Processing unit 110 may also be configured to determine directional indicators 4036, 4038 of landmarks 4016, 4018 relative to vehicle 200, [0690]: orientation); and
wherein the computing system is configured to determine a set of relative positions between the vehicle and each of the landmarks (see at least abstract, [0029], [0036]-[0040]: navigation between recognized landmarks may include integration of vehicle velocity to determine a location of the vehicle along the predetermined road model trajectory, [0067], [0346], [0351], [0381]-[0382]: vehicle may use landmarks occurring in sparse map 800 (and their known locations) to remove the dead reckoning-induced errors in position determination, [0444], [0446]-[0449]: localization of vehicle may be corrected or adjust by image observations of landmarks, [0594], [0671]-[0674]: processing unit 110 of vehicle 200 may be configured to determine positions 4024, 4026 of landmarks 4016, 4018, respectively, relative to vehicle 200. Processing unit 110 may also be configured to determine directional indicators 4036, 4038 of landmarks 4016, 4018 relative to vehicle 200, [0713]-[0715]).
As per claim(s) 8, Shashua discloses wherein the computing system is configured to determine another vehicle position based on the landmarks or some of the landmarks (see at least [0575]: match landmarks types in both sequences, [0904]: group of two or more landmarks may be designated as a super landmark, [0911]: a sequence, which may be stored in sparse data map 800, of a speed limit sign at a distance D1, followed by a stop sign at a distance D2, and two traffic lights at a distance D3 from a host vehicle (where D3>D2>D1) may constitute a unique, recognizable characteristic of the super landmark that may aid in verifying speed limit sign 7790, for example, as a recognized landmark from sparse data map 800, [0919]: identification of the at least one landmark may be based, at least in part, upon a super landmark signature associated with the group of landmarks. A super landmark signature may be a signature for uniquely identifying a group of landmarks. In one embodiment, a super landmark signature may be based on one or more of the landmark group characteristics discussed above (e.g., number of landmarks, relative distance between landmarks, and ordering sequence of landmarks), [0912]: once the vehicle is located at a viewing location for which visual information for the super landmark is included in sparse data map 800, the processing unit of the vehicle can analyze images captured by one or more cameras onboard the vehicle to look for expected shapes, patterns, angles, segment lengths, etc. to determine whether a group of objects forms an expected super landmark. Upon verifying the recognized super landmark, position determinations for the vehicle along a target trajectory may commence based on any of the landmarks included in a super landmark group, [0920]: Once a recognized landmark is identified based on an identified characteristic of the super landmark group, predetermined characteristics of the recognized landmark may be used to assist a host vehicle in navigation…recognized landmark may be used to determine a current position of the host vehicle, [0983]: location information representing a position of vehicle 7902 determined by…based on a position of vehicle 7902 relative to a recognized landmark).
As per claim(s) 9, Shashua discloses wherein the computing system is configured to retrieve a landmark geographic location of one of the landmarks from a remote database (see at least [0725]: sparse map 800 may be stored remotely, [0912]: once the vehicle is located at a viewing location for which visual information for the super landmark is included in sparse data map 800, the processing unit of the vehicle can analyze images captured by one or more cameras onboard the vehicle to look for expected shapes, patterns, angles, segment lengths, etc. to determine whether a group of objects forms an expected super landmark. Upon verifying the recognized super landmark, position determinations for the vehicle along a target trajectory may commence based on any of the landmarks included in a super landmark group).
As per claim(s) 10, Shashua discloses wherein the computing system is configured to determine a unit region based on the global system location, and retrieve a landmark geographic location from a storage based on the unit region (see at least abstract, [0457], [0991]: some geographic regions may include road segments for which sparse data model 800 already includes refined target trajectories, landmark representations, landmark positions, etc. For example, in certain geographic regions (e.g., urban environments, heavily traveled roadways, etc.), sparse data model 800 may be generated based upon multiple traversals of various road segments by vehicles in a data collection mode; server may receive transmissions from only those vehicles in a geographic location that the server identifies and queries for updated road information. The server can use information received from any portion of the vehicles from a certain geographic region to verify and/or update any aspect of sparse data model 800).
As per claim(s) 11, Shashua discloses wherein the computing system is configured to determine a route (see at least abstract, [0485]: reconstructed route or trajectory, [0516]: determine…trajectory..., [0920]: current position relative to a target trajectory may aid in determining a steering angle needed to cause the vehicle to follow the target trajectory (for example, by comparing a heading direction to a direction of the target trajectory at the determined current position of the vehicle relative to the target trajectory), [0991]: some geographic regions may include road segments for which sparse data model 800 already includes refined target trajectories, landmark representations, landmark positions, etc. For example, in certain geographic regions (e.g., urban environments, heavily traveled roadways, etc.), sparse data model 800 may be generated based upon multiple traversals of various road segments by vehicles in a data collection mode; server may receive transmissions from only those vehicles in a geographic location that the server identifies and queries for updated road information. The server can use information received from any portion of the vehicles from a certain geographic region to verify and/or update any aspect of sparse data model 800).
As per claim(s) 12, Shashua discloses wherein the vehicle comprises an autonomous vehicle (see at least abstract, [0485]: reconstructed route or trajectory, [0516]: determine…trajectory..., [0920]: current position relative to a target trajectory may aid in determining a steering angle needed to cause the vehicle to follow the target trajectory (for example, by comparing a heading direction to a direction of the target trajectory at the determined current position of the vehicle relative to the target trajectory), [0991]: some geographic regions may include road segments for which sparse data model 800 already includes refined target trajectories, landmark representations, landmark positions, etc. For example, in certain geographic regions (e.g., urban environments, heavily traveled roadways, etc.), sparse data model 800 may be generated based upon multiple traversals of various road segments by vehicles in a data collection mode; server may receive transmissions from only those vehicles in a geographic location that the server identifies and queries for updated road information. The server can use information received from any portion of the vehicles from a certain geographic region to verify and/or update any aspect of sparse data model 800).
As per claim(s) 13, Shashua discloses wherein the computing system is configured to determine a speed value and/or a heading value associated with the vehicle, and wherein the speed value is associated with a speed error, and wherein the heading value is associated with a heading error (see at least abstract, [0350]: heading error tracking control loop, [0382]: vehicle may use landmarks occurring in sparse map 800 (and their known locations) to remove the dead reckoning-induced errors in position determination, [0869]: yield a correction factor needed to adjust/calibrate the vehicle's speed sensor to match the speed determined based on the S1 to S2 speed calculation).
As per Claim 17, Shashua discloses wherein the computing system is also configured to: determine a temporal pattern; determine a vehicle trajectory based on the temporal pattern (see at least [0340]: Processing unit 110 may calculate the motion of candidate objects by observing the different positions of the objects across multiple image frames, which are captured at different times. Processing unit 110 may use the position and time values as inputs into mathematical models for calculating the motion of the candidate objects, [0438]: trajectory may be reconstructed based on at least one of accelerometer data, speed data, landmarks data, road geometry or profile data, vehicle positioning data, and ego motion data, [0446]: the distance to landmark may be estimated by Z=V*dt*R/D, where V is the speed of vehicle, R is the distance in the image from the landmark at time t1 to the focus of expansion, and D is the change in distance for the landmark in the image from t1 to t2. dt represents the (t2−t1). For example, the distance to landmark may be estimated by Z=V*dt*R/D, where V is the speed of vehicle, R is the distance in the image between the landmark and the focus of expansion, dt is a time interval, and D is the image displacement of the landmark along the epipolar line, [0523]: processor 1715 may analyze images from camera 122, speed from speed sensor 1720, position information from GPS unit 1710, motion data from accelerometer 1725, to determine an actual trajectory, [0545]: landmark density/frequency (e.g., detected or stored landmarks over a predetermined distance), [0801]: From a time when that recognized landmark first comes into view of the forward facing camera until a time when the vehicle has passed the recognized landmark (or the landmark has otherwise passed out of the field of view of the forward facing camera), navigation can proceed based on images captured of the recognized landmark (e.g., based on any of the techniques described above), [0869]: The first recognized landmark may be used to determine a first location S1 of the vehicle along a target trajectory at time T1, and the second recognized landmark may be used to determine a second location S2 of the vehicle along the target trajectory at time T2. Using information such as a measured distance between S1 and S2 and knowing a time difference between T1 and T2 may enable the processor unit of the vehicle to determine a speed over which the distance between S1 and S2 was covered).
As per Claim 18, Shashua discloses wherein the computing system is also configured to: determine a temporal pattern based at least in part on the landmark parameter (see at least [0340]: Processing unit 110 may calculate the motion of candidate objects by observing the different positions of the objects across multiple image frames, which are captured at different times. Processing unit 110 may use the position and time values as inputs into mathematical models for calculating the motion of the candidate objects, [0446]: the distance to landmark may be estimated by Z=V*dt*R/D, where V is the speed of vehicle, R is the distance in the image from the landmark at time t1 to the focus of expansion, and D is the change in distance for the landmark in the image from t1 to t2. dt represents the (t2−t1). For example, the distance to landmark may be estimated by Z=V*dt*R/D, where V is the speed of vehicle, R is the distance in the image between the landmark and the focus of expansion, dt is a time interval, and D is the image displacement of the landmark along the epipolar line, [0523]: processor 1715 may analyze images from camera 122, speed from speed sensor 1720, position information from GPS unit 1710, motion data from accelerometer 1725, to determine an actual trajectory, [0545]: landmark density/frequency (e.g., detected or stored landmarks over a predetermined distance), [0801]: From a time when that recognized landmark first comes into view of the forward facing camera until a time when the vehicle has passed the recognized landmark (or the landmark has otherwise passed out of the field of view of the forward facing camera), navigation can proceed based on images captured of the recognized landmark (e.g., based on any of the techniques described above), [0869]: The first recognized landmark may be used to determine a first location S1 of the vehicle along a target trajectory at time T1, and the second recognized landmark may be used to determine a second location S2 of the vehicle along the target trajectory at time T2. Using information such as a measured distance between S1 and S2 and knowing a time difference between T1 and T2 may enable the processor unit of the vehicle to determine a speed over which the distance between S1 and S2 was covered).
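Examiner's note (illustrative only): the distance-to-landmark relation Z = V*dt*R/D quoted above from Shashua [0446] can be evaluated with hypothetical values as follows.

V = 20.0    # vehicle speed, m/s
dt = 0.1    # time interval t2 - t1, s
R = 100.0   # image distance (pixels) from landmark to focus of expansion
D = 5.0     # image displacement (pixels) of the landmark from t1 to t2
Z = V * dt * R / D  # estimated distance to the landmark: 40.0 m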
As per Claim 25, Shashua discloses wherein the computing system is configured to determine the vehicle position based on a matching between (1) the pattern of the identified landmarks and (2) the pre-determined pattern of the previously identified landmarks (see at least [0911]: Any of the recognized landmarks included in the super landmark group may be identified based on recognition of various relationships between the landmarks included in the group. For example, a sequence, which may be stored in sparse data map 800, of a speed limit sign at a distance D1, followed by a stop sign at a distance D2, and two traffic lights at a distance D3 from a host vehicle (where D3>D2>D1) may constitute a unique, recognizable characteristic of the super landmark that may aid in verifying speed limit sign 7790, for example, as a recognized landmark from sparse data map 800, [0912]: Other relationships between the members of a super landmark may also be stored in sparse data map 800. For example, at a particular predetermined distance from recognized landmark 7790 and along a target trajectory associated with the road segment, the super landmark may form a polynomial 7794 between points A, B, C, and D each associated with a center of a member of the super landmark).
As per Claim 27, Shashua discloses wherein the landmark identification module is configured to obtain the global position of the vehicle, and to identify the landmarks based on the sensor signals from the sensor and based on the global position of the vehicle (see at least [0444]: landmark may be visible within a field of view of a camera (e.g., camera 122) installed on each of vehicles 1205-1225. In some embodiments, camera 122 may capture an image of a landmark, [0481]: Once the relative position between the vehicle and the landmarks is found, the landmarks' world coordinates are taken from the HD map, and the vehicle can use them to compute its own location and pose, [0381]-[0382]: dead-reckoning).
Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shashua in view of Goncalves and Mielenz, and further in view of US 2002/0198632 (“Breed”).
As per Claim 7, Shashua does not explicitly disclose wherein the computing system further comprises a communication module configured to transmit the global system location to another vehicle.
However, Breed teaches wherein the computing system further comprises a communication module configured to transmit the global system location to another vehicle (see at least [0149]: vehicle to vehicle communications can be used to transmit DGPS corrections from one vehicle to another whether the source is a central DGPS system or one based on PPS or other system, [0181]-[0192]: To provide a means whereby vehicles near each other can communicate their position and/or their velocity to each other and thereby reduce the risk of a collision).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the invention as disclosed by Shashua by incorporating the teachings of Breed with a reasonable expectation of success in order to provide a means whereby vehicles near each other can communicate their position and/or their velocity to each other and thereby reduce the risk of collision (see at least Breed [0181]-[0192]).
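Examiner's note (illustrative only): the vehicle-to-vehicle transmission of a global system location described by Breed could be sketched, under hypothetical message-format and addressing assumptions, as follows.

import json
import socket

def broadcast_position(lat, lon, speed_mps, port=4200):
    # Broadcast the determined global system location (and velocity) so that
    # nearby vehicles can receive it, per Breed's vehicle-to-vehicle exchange.
    msg = json.dumps({"lat": lat, "lon": lon, "speed_mps": speed_mps}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(msg, ("255.255.255.255", port))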
Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shashua in view of Goncalves and Mielenz, and further in view of US 8,301,374 (“Surampudi”) and US 2010/0220008 (“Conover”).
As per claim(s) 14, Shashua discloses wherein the computing system is configured to repeatedly update a position of the vehicle based on the speed value, and/or repeatedly update a location error based on the speed error and the heading error (see at least abstract, [0036]: Navigation between recognized landmarks may include integration of vehicle velocity to determine a location of the vehicle along the predetermined road model trajectory, [0350]: heading error tracking control loop, [0382]: vehicle may use landmarks occurring in sparse map 800 (and their known locations) to remove the dead reckoning-induced errors in position determination, [0459]: determine the current position and hence heading of vehicles 1205-1225 by using previously determined position, estimated speed, [0869]: yield a correction factor needed to adjust/calibrate the vehicle's speed sensor to match the speed determined based on the S1 to S2 speed calculation, [0922]: determine a location along the predetermined road model trajectory based on a vehicle velocity), but does not explicitly disclose wherein the computing system is configured to repeatedly update the global position of the vehicle based on the speed value and the heading value, and/or repeatedly update a location error based on the speed error and the heading error.
However, Surampudi discloses wherein the computing system is configured to repeatedly update the global position of the vehicle based on the speed value and heading value (see at least column 4 lines 20-50: When no landmark is currently identified, the position of the vehicle is given by some form of vehicle state estimation such as: y_est = ∫ v·cos(H_est) dt, x_est = ∫ v·sin(H_est) dt, H_est = ∫ ψ̇ dt, where x_est, y_est are the estimated location (longitude and latitude with respect to a local origin), v is vehicle speed, H_est is the estimated heading of the vehicle, and ψ̇ is the measured yaw rate of the vehicle; When a vehicle passes a landmark that can be detected, identified, and located, the landmark's location values are read from database 104: y_est = y_landmark + y_offset, x_est = x_landmark + x_offset, H_est = θ_landmark + θ_offset, where x_landmark, y_landmark, θ_landmark are the true location (longitude, latitude and heading with respect to a local origin) of the landmark, and x_offset, y_offset, θ_offset are the offsets of the vehicle from the landmark in space and time. Typically, the offsets can be easily computed using basic geometry and image comparisons).
It would have been obvious to one of ordinary skill in the art before the effective filing date to provide the invention as disclosed by Shashua by incorporating the teachings of Surampudi with a reasonable expectation of success in order to enhance position estimation accuracy.
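Examiner's note (illustrative only): the Surampudi state estimation quoted above alternates dead-reckoning integration with a reset to a detected landmark's known location; a minimal sketch, with hypothetical names, follows.

import math

def dead_reckon_step(x_est, y_est, h_est, v, yaw_rate, dt):
    # Integration between landmarks, per the quoted relations:
    # y_est = ∫ v·cos(H_est) dt, x_est = ∫ v·sin(H_est) dt, H_est = ∫ ψ̇ dt.
    y_est += v * math.cos(h_est) * dt
    x_est += v * math.sin(h_est) * dt
    h_est += yaw_rate * dt
    return x_est, y_est, h_est

def landmark_reset(landmark_pose, offsets):
    # On detecting an identified landmark, replace the estimate with the
    # landmark's true location plus the measured offsets:
    # x_est = x_landmark + x_offset, etc.
    (x_lm, y_lm, th_lm), (x_off, y_off, th_off) = landmark_pose, offsets
    return x_lm + x_off, y_lm + y_off, th_lm + th_off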
However, Conover teaches repeatedly updating a location error based on the speed error and the heading error (see at least abstract, [0006], claim 14: wherein the integrator is configured to: a. difference signals of the second input with the Kalman filter's current estimate of corresponding states to produce position, velocity and heading error signal observations; b. update the states and covariances according to the observations utilizing the corresponding Kalman Gain Matrix and GPS Observation Matrix; and c. propagate the updated states and covariances to the time of the next observation).
It would have been obvious to one of ordinary skill in the art before the effective filing date to provide the invention as disclosed by Shashua by incorporating the teachings of Conover with a reasonable expectation of success in order to integrate position measurements, estimate errors in the full trajectory variables, and output optimal trajectory estimates of position, velocity and true heading-from-north ("heading") as well as corrective factors to mitigate portions of the measurement errors from various sources.
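Examiner's note (illustrative only): the observation/update/propagation cycle recited in Conover's claim 14 reduces to a bare Kalman measurement update over position, velocity, and heading error states; the matrices and values below are hypothetical.

import numpy as np

x = np.zeros(3)                  # error states: [position, velocity, heading]
P = np.eye(3)                    # state covariance
H = np.eye(3)                    # GPS observation matrix (direct observation)
R = np.diag([4.0, 0.25, 0.01])   # measurement noise covariance

z = np.array([1.2, -0.1, 0.02])  # differenced signals (measurement - estimate)
y = z - H @ x                    # (a) position/velocity/heading error observations
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain matrix
x = x + K @ y                    # (b) update the states per the observations...
P = (np.eye(3) - K @ H) @ P      # ...and the covariances
# (c) propagation of x and P to the time of the next observation would follow.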
Claim(s) 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shashua in view of Goncalves and Mielenz, and further in view of US 2014/0324310 (“Kobayashi”).
As per Claim 15, Shashua does not explicitly disclose wherein the computing system is configured to obtain a wheel RPM value from a rotations-per-minute (RPM) sensor and wherein the computing system is configured to determine a speed value based on the wheel RPM value in combination with a known wheel diameter.
However, Kobayashi teaches wherein the computing system is configured to obtain a wheel RPM value from a rotations-per-minute (RPM) sensor and wherein the computing system is configured to determine a speed value based on the wheel RPM value in combination with a known wheel diameter (see inter alia abstract, [0036]: vehicle speed computing unit 102 calculates a vehicle speed based on wheel speeds (rotational speeds of road wheels) detected by the wheel speed sensors 11 and diameters of tires of the vehicle 1. When calculating a vehicle speed, a filtering process or an averaging process may be done, if needed. In addition, the vehicle speed computing unit 102 calculates a travel distance of the vehicle 1 by integrating the vehicle speed).
It would have been obvious to one of ordinary skill in the art before the effective filing date to provide the invention as disclosed by Shashua by incorporating the teachings of Kobayashi with a reasonable expectation of success in order to calculate a travel distance of the vehicle by integrating the vehicle speed and to more adequately execute parking assist operations for obstacles around the vehicle.
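For illustration, the computation Kobayashi describes reduces to wheel circumference times rotation rate; the minimal sketch below assumes SI units, and the function names are hypothetical.

```python
import math

def speed_from_rpm(wheel_rpm: float, wheel_diameter_m: float) -> float:
    """Vehicle speed (m/s) from wheel RPM and a known wheel diameter:
    one revolution covers pi * diameter meters, and RPM / 60 is rev/s."""
    return math.pi * wheel_diameter_m * (wheel_rpm / 60.0)

def travel_distance_m(sampled_speeds_mps, dt_s: float) -> float:
    """Travel distance by integrating (summing) the sampled vehicle speed,
    as described in the cited paragraph [0036]."""
    return sum(v * dt_s for v in sampled_speeds_mps)
```

A real implementation would also apply the filtering or averaging step Kobayashi mentions before integrating the speed.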
Claims 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Shashua in view of Goncalves and Mielenz, and further in view of US 20140253375 (“Rudow”).
As per Claim 19, Shashua discloses wherein the sensor comprises a camera, wherein the apparatus is configured to record an image from the camera (see at least [0004]: analysis of images captured by one or more of the cameras) but does not explicitly disclose wherein the apparatus is configured to record the image based on a satisfaction of a criterion.
However, Rudow teaches wherein the apparatus is configured to record the image based on a satisfaction of a criterion (see at least [0589], [0594]: if the current image is the first image or if movement of the image capturing device exceeds a distance threshold, then reset the image capturing device to use the current image as a reference image. For example, the current image is used as a new reference image when the image capturing device 1540G movement exceeds a distance threshold since the current reference image was taken or since the previous image, depending on how far the image capturing device 1540G has been moved, or how much time has passed. The creation of an association between the current image and one or more previous images will be interfered with if the image capturing device 1540G has been moved too far or if too much time has passed. For example, a range of a distance threshold is approximately 3 to 10 feet when a new reference image may be taken. Typically a new reference image is taken and used if the image capturing device 1540G has moved about 10 feet or more).
It would have been obvious to one of ordinary skill in the art before the effective filing date to provide the invention as disclosed by Shashua by incorporating the teachings of Rudow with a reasonable expectation of success in order to improve position determination of a cellular device using locally measured movement information from an image capturing device.
As per Claim 20, Shashua does not explicitly disclose wherein the criterion is associated with a threshold.
However, Rudow teaches wherein the criterion is associated with a threshold (see at least [0589], [0594], quoted above with respect to Claim 19: the current image is used as a new reference image when movement of the image capturing device 1540G exceeds a distance threshold, for example a threshold in the range of approximately 3 to 10 feet).
It would have been obvious to one of ordinary skill in the art before the effective filing date to provide the invention as disclosed by Shashua by incorporating the teachings of Rudow with a reasonable expectation of success for the same reasons set forth above with respect to Claim 19.
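To make the cited criterion concrete, the following is a minimal sketch of a Rudow-style reference-image test; the function signature is hypothetical, the 10-foot default is taken from the quoted range, and the time-threshold value is an assumed placeholder (the passage mentions elapsed time but recites no specific number).

```python
def should_take_new_reference(moved_feet: float, elapsed_s: float,
                              is_first_image: bool,
                              distance_threshold_feet: float = 10.0,
                              time_threshold_s: float = 30.0) -> bool:
    """Reset to a new reference image if this is the first image, or the
    capturing device has moved past the distance threshold, or too much
    time has passed since the current reference image was taken."""
    return (is_first_image
            or moved_feet > distance_threshold_feet
            or elapsed_s > time_threshold_s)
```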
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Shashua in view of Goncalves and Mielenz, and further in view of US 2012/0310516 (“Zeng”).
As per Claim 21, Shashua discloses that information may be uploaded to the server at a predetermined periodic rate (e.g., several times per second, once per second, once per minute, once every several minutes, once per hour, or any other suitable time interval) (see at least [0986]), but does not explicitly disclose wherein the computing system is configured to update the global system location based on a predetermined frequency.
However, Zeng teaches wherein the computing system is configured to update the global position of the vehicle based on a predetermined frequency (see at least abstract, [0043]: determine or update the location of vehicle x.sub.t+1 and measured landmark f' relative to a sub-map. The calculated x.sub.t+1 and f' values may be approximate, best fit, or minimized solutions to the weighted least square equation. Current vehicle position x.sub.t+1 and object location f' may be calculated at regular intervals or time steps, or at a predefined update rate. Current vehicle position x.sub.t+1 and object location f' may, for example, be calculated every 100 milliseconds or at other intervals).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the invention as disclosed by Shashua by incorporating the teachings of Zeng with a reasonable expectation of success in order to enhance the accuracy of vehicle location measurement.
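As a rough sketch of a Zeng-style fixed-rate update loop (Zeng's example interval is 100 milliseconds), with hypothetical function names; a production system would use a real-time scheduler rather than sleep():

```python
import time

def run_periodic_updates(update_position, period_s: float = 0.1):
    """Invoke the vehicle-position/landmark update at a predefined rate;
    the drift-free tick keeps the long-run rate at one call per period."""
    next_tick = time.monotonic()
    while True:
        update_position()  # recompute current vehicle position and landmark location
        next_tick += period_s
        time.sleep(max(0.0, next_tick - time.monotonic()))
```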
Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Shashua in view of Goncalves and Mielenz, and further in view of US 20080167814 (“Samarasekera”).
As per Claim 23, Shashua discloses successive images, 3D reconstruction of a scene, and wherein the pattern of the identified landmarks comprises a distance pattern (see at least [0322], [0340]: Processing unit 110 may calculate the motion of candidate objects by observing the different positions of the objects across multiple image frames, which are captured at different times. Processing unit 110 may use the position and time values as inputs into mathematical models for calculating the motion of the candidate objects, [0533]: identifier may include a distance of the landmark relative to another landmark, [0911]: a sequence, which may be stored in sparse data map 800, of a speed limit sign at a distance D1, followed by a stop sign at a distance D2, and two traffic lights at a distance D3 from a host vehicle (where D3>D2>D1) may constitute a unique, recognizable characteristic of the super landmark that may aid in verifying speed limit sign 7790, for example, as a recognized landmark from sparse data map 800, [0919]: identification of the at least one landmark may be based, at least in part, upon a super landmark signature associated with the group of landmarks. A super landmark signature may be a signature for uniquely identifying a group of landmarks. In one embodiment, a super landmark signature may be based on one or more of the landmark group characteristics discussed above (e.g., number of landmarks, relative distance between landmarks, and ordering sequence of landmarks)) but does not explicitly disclose wherein the pattern of the identified landmarks comprises a temporal pattern.
However, Samarasekera teaches wherein the pattern of the identified landmarks comprises a temporal pattern (see at least [0065]: given a time stamp t, the first step of the proposed algorithm is to detect and track a set of natural landmarks from the images of the forward and backward stereo pairs individually…extracted landmarks from both stereo pairs at the current time stamp (t) are used to search the landmark database for their most similar landmarks via the efficient database matching technique).
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the invention as disclosed by Shashua by incorporating the teachings of Samarasekera with a reasonable expectation of success in order to search the landmark database for the most similar landmarks.
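To illustrate the temporal pattern described in Samarasekera's [0065], the sketch below timestamps landmarks extracted at the current time stamp and searches a database for each one's most similar stored landmark; the data structures and the simple Euclidean descriptor distance are hypothetical stand-ins for the reference's efficient database matching technique.

```python
from dataclasses import dataclass

@dataclass
class Landmark:
    timestamp: float   # time stamp t at which the landmark was extracted
    descriptor: tuple  # feature descriptor of the landmark

def match_landmarks(current: list, database: list) -> list:
    """For each landmark extracted at the current time stamp, search the
    database for its most similar stored landmark (nearest descriptor)."""
    return [(obs, min(database, key=lambda lm: _descriptor_distance(obs, lm)))
            for obs in current]

def _descriptor_distance(a: Landmark, b: Landmark) -> float:
    """Euclidean distance between descriptors (a stand-in for the
    reference's efficient database matching technique)."""
    return sum((p - q) ** 2 for p, q in zip(a.descriptor, b.descriptor)) ** 0.5
```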
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20120299702 (“Krishna”) (see at least Fig. 4, [0059]: process commences upon receipt of machine location information by a processor 111 of ECM 110 (Step 410)…machine location information may include GPS location information received by transceiver 131 from an orbital positioning satellite);
US 20140236477 (“Chen”) (see at least [0027]: based on perception data 58, PBL unit 50 may update the estimated position of machine 10 generated by position prediction unit 54 by utilizing position update unit 68…new estimated position generated by position update unit 68 may correct for errors that may have accumulated in the estimated position generated by position prediction unit 54, [0028]: Object detection unit 60 may utilize perception data 58 from object detection device 36 to determine positions of detected objects relative to the position of object detection device 36/machine 10, [0030]: Position update unit 68 may update the estimated position generated by position prediction unit 54 based on matching objects 65…Position update unit 68 may then compare the relative positions of the known objects to measurements of the relative position of matching objects 65 in relation to machine 10 as detected by object detection device 36. Based on errors between the relative positions of known objects surrounding the a priori estimated position and the measured relative positions of matching objects 65 surrounding machine 10, position update unit 68 may generate a refined a posteriori estimated position of machine 10, [0032]: Position determination unit 56 may receive the estimated positions generated by position prediction unit 54 and position update unit 68 and output a position 69 of machine 10 based on those estimated positions…when position determination unit 56 utilizes the updated estimated position from position update unit 68 to determine output position 69, the error in output position 69 is reset to be no greater than thirty centimeters, [0044]: Object verification unit 76 may extract the dimensions, shape, aspect ratio, color, and/or any other physical characteristics associated with unmatched object 72 from the captured image, and compare the extracted characteristics to a database of objects; Examiner interprets the error in output position reset as updating the location error to a value as a result from updating the estimated position; the claim limitations are interpreted under broadest reasonable interpretation and Applicant’s specification does not appear to provide additional information regarding the limitation “set a location error to a value after the global system location is updated based on the relative position between the vehicle and the landmark”);
US 8,442,791 (“Stahlin”) (see at least abstract: position for a vehicle is corrected by detecting landmarks on the journey route and correcting the measured vehicle position when a landmark of this kind has been identified. The landmarks are stored in a database in the vehicle with associated exact GPS positions. When a landmark is reached, the associated exact GPS position is compared with the position measured in the vehicle, whereupon the measured position is corrected. In this way, the position finding can be improved).
US 5,961,571 (“Gorr”) (see at least abstract: automatically tracking the location of a vehicle includes a visual image detector mounted on the vehicle for producing as the vehicle moves along a route, claim 1: information relating to features of scenery about said vehicle at successive locations or landmarks along said route, respectively, whereby when said vehicle retraces travel along at least a portion of said route, said digital signal processing means converts resulting said analog image signals into successive digitized second image data strips corresponding to a unique one of said first image data strips… sparse tracking means for utilizing selected ones of said plurality of first image data strips for establishing a sparse database thereof in memory, the selected image data strips representing substantially spaced apart successive locations along said route, said sparse tracking means further including landmark recognition means for comparing each of said second image data strips as they occur in real time with the ones of said first image data strips of the sparse database, respectively, for locating said vehicle in real time at the landmark or location associated with the one of said first image data strips most closely corresponding to the current said second image data strip).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANGELINA M SHUDY whose telephone number is (571)272-6757. The examiner can normally be reached M - F 10am - 6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fadey Jabr can be reached at 571-272-1516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Angelina M Shudy/Primary Examiner, Art Unit 3668