DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 17 September 2024 is being considered by the examiner.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
Self-position estimation unit in claims 1 and 2
Target recognition unit in claims 1-4, 7, and 9
Map information acquisition unit in claim 1
Sensor point group acquisition unit in claims 1 and 2
Point group matching unit in claims 1 and 6-8
Target selection unit in claims 2-5
Prediction unit in claim 8
Self-map generation unit in claims 9 and 10
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
Self-position estimation unit as disclosed in [0018] of the specification estimates the self-position based on information acquired by external sensors. This is being interpreted as being a function/processing module rather than an apparatus.
Target recognition unit as disclosed in [0019] recognizes targets around the host vehicle based on external environment information detected by the external environment sensor such as a camera. This is being interpreted as being a function/processing module rather than an apparatus.
Map information acquisition unit as disclosed in [0020] acquires map information including map information stored in a storage unit mounted on the host vehicle. This is being interpreted as being a function/processing module rather than an apparatus.
Sensor point group acquisition unit as disclosed in [0021] acquires sensor point groups, which are a plurality of positions around the targets recognized by the target recognition unit, from the external environment information detected by the external environment sensor. This is being interpreted as being a function/processing module rather than an apparatus.
Point group matching unit as disclosed in [0022] estimates positions of the targets on the map recognized by the target recognition unit by matching between the sensor point group acquired by the sensor point group acquisition unit and the map point group acquired by the map information acquisition unit. This is being interpreted as being a function/processing module rather than an apparatus.
Target selection unit as disclosed in [0060] calculates a provisional position 11P4 of a sensor feature from a self-position estimation result 10P3 and a sensing result 13R1. Since the target selection unit calculates and uses information acquired from external sensors, it is being interpreted as a function/processing module rather than an apparatus.
Prediction unit as disclosed in [0071] predicts a position and a speed of a different vehicle ahead on a map estimated by a point group matching unit. This is being interpreted as being a function/processing module rather than an apparatus.
Self-map generation unit as disclosed in [0088] generates a self-map from information from an odometry unit and target information from a target recognition unit. This is being interpreted as being a function/processing module rather than an apparatus.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-12 are rejected under 35 USC § 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1
Claim 1 is directed to a machine and claim 11 is directed to a process, each of which is one of the statutory categories of invention.
Step 2a Prong 1
Regarding claims 1 and 11, the claims recite, in part, “estimating a self-position, recognizing targets around the host, acquiring a map, sensing position of the host, and comparing to a map”. These limitations of estimating, recognizing, acquiring, sensing, and comparing, when read in light of the specification, are mental processes capable of being performed in the human mind, which have been identified as abstract ideas (MPEP 2106.04(a)(2)).
Claims 2-10 and 12 depend from claims 1 and 11, do not remedy any of the deficiencies of claims 1 and 11, and therefore are rejected on the same grounds as claims 1 and 11 above.
Step 2a Prong 2
This judicial exception is not integrated into a practical application. In particular, the claims recite only one additional element: using generic computing components or automation to perform the generic computing functions recited in the claims. This additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claims are therefore directed to an abstract idea.
Step 2b
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional limitations of using generic computer components and automation amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components and automation cannot provide an inventive concept.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 3-5 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 3 recites the limitation "the corresponding feature". There is insufficient antecedent basis for this limitation in the claim.
Claim 4 recites the limitation "the feature corresponding to the target". There is insufficient antecedent basis for this limitation in the claim.
Claim 5 is rejected due to its dependence on a rejected base claim.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-7 and 9-10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Shambik (US 2022/0027642).
Shambik discloses,
Claims 1 and 11; An external environment recognition device, comprising: a self-position estimation unit that estimates a self-position(130) which is a position of a host vehicle on a map([0089] position sensor 130 may include a GPS receiver, such receivers can determine a user position and velocity by processing signals broadcasted by global positioning system satellites) stored in a map database(160) based on external environment information acquired by an external environment sensor([0275] A landmark may be visible within a field of view of a camera (e.g., camera 122) and other sensors or systems (e.g., GPS system) may also provide certain identification information of the landmark (e.g., position of landmark)) mounted on the host vehicle([0097] the image capture devices may be located on or in one or both of the side mirrors of vehicle 200, on the roof of vehicle 200, on the hood of vehicle 200, on the trunk of vehicle 200, on the sides of vehicle 200, mounted on, positioned behind, or positioned in front of any of the windows of vehicle 200, and mounted in or near light fixtures on the front and/or back of vehicle 200, etc); a target recognition unit(110 and 140) that recognizes targets around the host vehicle based on the external environment information([0156] Processing unit 110 may filter the set of candidate objects to exclude certain candidates (e.g., irrelevant or less relevant objects) based on classification criteria. Such criteria may be derived from various properties associated with object types stored in a database (e.g., a database stored in memory 140)); a map information acquisition unit(172) that acquires map information([0099] Via wireless transceiver 172, system 100 may receive, for example, periodic or on demand updates to data stored in map database 160, memory 140, and/or memory 150), which includes a map point group which is a set of feature points on the map and feature information including information on a position and a type of a feature([0198] Sparse map 800 may be stored in a memory, such as memory 140 or 150, [0203] location information may be included in sparse map 800 for various map elements, including, for example, landmark locations, road profile locations, etc); a sensor point group acquisition unit(402) that acquires a sensor point group around the targets recognized by the target recognition unit from the target recognition unit([0147] monocular image analysis module 402 may include instructions for detecting a set of features within the set of images, such as lane markings, vehicles, pedestrians, road signs, highway exit ramps, traffic lights, hazardous objects, and any other feature associated with an environment of a vehicle); and a point group matching unit(1230) that estimates positions of the targets on the map by matching between the sensor point group acquired by the sensor point group acquisition unit and the map point group([0282] Server 1230 may identify landmarks for the sparse map by identifying unique matches between landmarks 1501, 1503, and 1505 of drive 1510 (sensor point group) and landmarks 1507 and 1509 of drive 1520 (map point group). Such a matching algorithm may result in identification of landmarks 1511, 1513, and 1515, and vehicle 200, which contains 402, the map points from 800, and camera 122, communicates with server 1230).
Claims 2 and 12; Further comprising: a target selection unit(190) that selects the target satisfying a predetermined condition among the targets recognized by the target recognition unit(190 is part of 110 and [0156] Processing unit 110 may filter the set of candidate objects to exclude certain candidates (e.g., irrelevant or less relevant objects) based on classification criteria. Such criteria may be derived from various properties associated with object types stored in a database (e.g., a database stored in memory 140)) based on the self-position estimated by the self-position estimation unit(Fig. 1 shows 130 feeds data into 110, which 190 is in), a recognition result of the targets recognized by the target recognition unit(Fig. 1 shows 120 feeds data into 110, which 190 is in), and the feature information(140 and 150 contain the sparse map, which contains feature information as disclosed in [0190], and Fig. 1 shows 140 and 150 feed data into 110), wherein the sensor point group acquisition unit acquires a sensor point group around the selected target from the target recognition unit([0147] monocular image analysis module 402 may include instructions for detecting a set of features within the set of images, such as lane markings, vehicles, pedestrians, road signs, highway exit ramps, traffic lights, hazardous objects, and any other feature associated with an environment of a vehicle).
Claim 3; Wherein the target selection unit selects the target in a case where there is the corresponding feature around the targets recognized by the target recognition unit([0108] The first image capture device 122 may acquire a plurality of first images relative to a scene associated with the vehicle 200) while referring to an information table in which a type of the target and the type of the feature are associated with each other([0421] 3400A is an image that may be captured by an image capture device of a host vehicle, [0435] database 3600 may include objects detected in image 3400A).
Claim 4; Wherein the target selection unit estimates a provisional position of the target on the map([0153] 110 may estimate camera motion between consecutive image frames and calculate the disparities in pixels between the frames to construct a 3D-map of the road. Processing unit 110 may then use the 3D-map to detect the road surface, as well as hazards existing above the road surface) by using the self-position and the recognition result of the targets recognized by the target recognition unit(Fig. 1 shows 110 is fed data from both 120 and 130), and selects the target in a case where a distance between the provisional position and a position of the feature corresponding to the target in the information table is less than or equal to a threshold value([0162] 110 performs multi-frame analysis, the set of measurements constructed at step 554 may become more reliable and associated with an increasingly higher confidence level, and [0436] discloses that 3640 may include a region associated with the object which would update as the confidence level increases).
Claim 5; Wherein the information table retains a threshold value used by the target selection unit for each association of the type of the target with the type of the feature(Fig. 36), and the target selection unit changes the threshold value retained in the information table based on at least one of the recognition result of the targets, weather, and brightness([0332] A different target trajectory may be generated for different road conditions (e.g., wet, snowy, icy, dry, etc.), vehicle conditions (e.g., tire condition or estimated tire condition, brake condition or estimated brake condition, amount of fuel remaining, etc.) or environmental factors (e.g., time of day, visibility, weather, etc.) and [0436] discloses that other information that may be included in database 3600 may include a description of the detected object, a time and/or date stamp, information about the image (e.g., an image ID, etc.), information about the vehicle (e.g., a vehicle ID, etc.), or any other information that may be relevant for analysis or navigation purposes).
Claim 6; Wherein the point group matching unit determines whether or not point group matching has succeeded, and outputs a position of the target on the map estimated by the matching between the sensor point group and the map point group in a case where the point group matching has succeeded([0283] Server 1230 may accept potential landmarks for use on the sparse map when a ratio of images in which the landmark does appear to images in which the landmark does not appear exceeds a threshold).
Claim 7; Wherein the point group matching unit determines whether or not point group matching has succeeded, and outputs a position of the target on the map calculated by using the self-position and a recognition result of the targets recognized by the target recognition unit in a case where the point group matching has failed([0283] Server 1230 may also reject potential landmarks when a ratio of images in which the landmark does not appear to images in which the landmark does appear exceeds a threshold. Further, [0261] when the new data indicates that a previously recognized landmark at a specific location no longer exists, or is replaced by another landmark (i.e. no match), the server may determine that the new data should trigger an update to the model).
Claim 9; Further comprising a self-map generation unit that generates the map by using relative position and posture of the host vehicle estimated by odometry, a recognition result of the targets recognized by the target recognition unit, and the sensor point group([0202] Sparse map 800 is generated based on data collected from one or more vehicles which have position sensors 130 that estimate the position of each vehicle, [0269] vehicle transmitting navigation information from inertial sensors (odometry), results of targets from their processors 110, data from monocular image analysis module 402, and [0200] discloses the data can be transmitted/stored/accessed to/from a remote server 1230).
Claim 10; Wherein the self-map generation unit stores the sensor point group as the map in the map database around the feature described in an information table in which a type of the target and the type of the feature are associated with each other([0299] Server 1230 stores data for sparse map 800 using data from monocular image analysis module 402, [0436] Data table 3600 is stored inside 150 or 160 where 800 is also stored which means 800 has access to read/write data table 3600).
Claims 1 and 8 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Eldar (US 2021/0064057).
Eldar discloses,
Claims 1 and 11; An external environment recognition device, comprising: a self-position estimation unit that estimates a self-position(130) which is a position of a host vehicle on a map([0118] position sensor 130 may include a GPS receiver, such receivers can determine a user position and velocity by processing signals broadcasted by global positioning system satellites) stored in a map database(160) based on external environment information acquired by an external environment sensor([0306] A landmark may be visible within a field of view of a camera (e.g., camera 122) and other sensors or systems (e.g., GPS system) may also provide certain identification information of the landmark (e.g., position of landmark)) mounted on the host vehicle([0126] the image capture devices may be located on or in one or both of the side mirrors of vehicle 200, on the roof of vehicle 200, on the hood of vehicle 200, on the trunk of vehicle 200, on the sides of vehicle 200, mounted on, positioned behind, or positioned in front of any of the windows of vehicle 200, and mounted in or near light fixtures on the front and/or back of vehicle 200, etc); a target recognition unit(110 and 140) that recognizes targets around the host vehicle based on the external environment information([0185] Processing unit 110 may filter the set of candidate objects to exclude certain candidates (e.g., irrelevant or less relevant objects) based on classification criteria. Such criteria may be derived from various properties associated with object types stored in a database (e.g., a database stored in memory 140)); a map information acquisition unit(172) that acquires map information([0128] Via wireless transceiver 172, system 100 may receive, for example, periodic or on demand updates to data stored in map database 160, memory 140, and/or memory 150), which includes a map point group which is a set of feature points on the map and feature information including information on a position and a type of a feature([0227] Sparse map 800 may be stored in a memory, such as memory 140 or 150, [0233] location information may be included in sparse map 800 for various map elements, including, for example, landmark locations, road profile locations, etc); a sensor point group acquisition unit(402) that acquires a sensor point group around the targets recognized by the target recognition unit from the target recognition unit([0176] monocular image analysis module 402 may include instructions for detecting a set of features within the set of images, such as lane markings, vehicles, pedestrians, road signs, highway exit ramps, traffic lights, hazardous objects, and any other feature associated with an environment of a vehicle); and a point group matching unit(1230) that estimates positions of the targets on the map by matching between the sensor point group acquired by the sensor point group acquisition unit and the map point group([0313] Server 1230 may identify landmarks for the sparse map by identifying unique matches between landmarks 1501, 1503, and 1505 of drive 1510 (sensor point group) and landmarks 1507 and 1509 of drive 1520 (map point group). Such a matching algorithm may result in identification of landmarks 1511, 1513, and 1515, and vehicle 200, which contains 402, the map points from 800, and camera 122, communicates with server 1230).
Claim 8; Further comprising: a prediction unit that predicts at least one of a trajectory, a speed, and an intention of the target based on a position of the target on the map estimated by the point group matching unit and a position of the feature on the map([0512] The prediction unit is a function performed by server 1230; server 1230 may be configured to monitor vehicles traveling along roadways 3660 and 3616 and to predict trajectories of vehicles to ensure that vehicles do not come in close proximity of one another, and Fig. 36A also shows that a map is used for prediction purposes as well).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to John Merino, whose telephone number is (703) 756-4721. The examiner can normally be reached Mon - Thu, 8am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Erin Piateski, can be reached at (571) 270-7429. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/John C Merino/Patent Examiner, Art Unit 3669
/Erin M Piateski/Supervisory Patent Examiner, Art Unit 3669