DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The Amendment filed on 1/14/2026 has been entered. Claims 8-11 have been added. Claims 1-11 remain pending in the application.
Priority
Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7 are rejected under 35 U.S.C. 103 as being unpatentable over Li (U.S. Patent Application Publication No. 2023/0130901) in view of Yang (U.S. Patent Application Publication No. 2022/0004777), and further in view of Lee (U.S. Patent Application Publication No. 2020/0218276).
Regarding claim 1, Li discloses a map generation device comprising processing circuitry (computing unit 501)
to acquire, from at least one sensor (collection sensor and radar sensor) that detects vehicle trajectory present in a target region, information on the vehicle trajectory detected by the at least one sensor as information for map generation, and identify whether or not the information for map generation acquired includes three-dimensional information and whether or not the information for map generation acquired is information regarding a tracking target object whose motion needs to be tracked (paragraph [0024]: In S110, vehicle trajectory data is acquired, where the vehicle trajectory (tracking target) data is three-dimensional trajectory data),
to convert, at least by removing height information from the three-dimensional information, the information for map generation identified by the processing circuitry as the information for map generation including the three-dimensional information and the information for map generation regarding the tracking target object into information for map generation after two-dimensional conversion that is two-dimensional space information, to track the tracking target object in a two-dimensional space on a basis of the information for map generation after two-dimensional conversion converted by the processing circuitry, and extract the information for map generation after two-dimensional conversion regarding the tracking target object that has moved (paragraph [0025]: In S120, the vehicle trajectory data is matched with a lane in a two-dimensional map, and vehicle trajectory data matched with the lane is determined; the two-dimensional map is a planar surface without height information, and the height information is removed in the two-dimensional map),
to convert, at least by re-adding the height dimension, the information for map generation after two-dimensional conversion extracted by the processing circuitry into information for map generation after three-dimensional conversion that is three-dimensional space information, and to generate the map on a basis of the information for map generation after three-dimensional conversion converted by the processing circuitry (paragraph [0026]: In S130, a height is assigned to the lane in the two-dimensional map according to the matched vehicle trajectory data to obtain the three-dimensional map; the three-dimensional map is the two-dimensional map with height dimension).
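By way of a hypothetical illustration only (this sketch is not drawn from Li, Yang, or Lee, and the data layout is an assumption), the claimed sequence of removing the height dimension, tracking in the two-dimensional space, and re-adding the height dimension can be expressed as:

```python
# Hypothetical sketch of the claimed 3D -> 2D -> 3D pipeline.
# Points are (x, y, z) tuples; z is the height dimension.

def to_2d(points_3d):
    """Remove the height (z) dimension, keeping the removed heights aside."""
    points_2d = [(x, y) for (x, y, z) in points_3d]
    heights = [z for (_, _, z) in points_3d]
    return points_2d, heights

def track_2d(points_2d, delta):
    """Track the target in the two-dimensional plane by applying a planar motion."""
    dx, dy = delta
    return [(x + dx, y + dy) for (x, y) in points_2d]

def to_3d(points_2d, heights):
    """Re-add the height dimension to obtain three-dimensional information."""
    return [(x, y, z) for ((x, y), z) in zip(points_2d, heights)]

trajectory = [(0.0, 0.0, 1.7), (1.0, 0.5, 1.7)]   # hypothetical 3D trajectory
flat, heights = to_2d(trajectory)                  # two-dimensional conversion
moved = track_2d(flat, (2.0, 0.0))                 # tracking in the 2D space
restored = to_3d(moved, heights)                   # three-dimensional conversion
```

The sketch preserves the removed heights so they can be re-added after tracking, mirroring the conversion/re-conversion recited in claim 1.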
Li discloses all the features with respect to claim 1 as outlined above. However, Li fails to explicitly disclose a sensor that detects an object present in a target region, dynamic map generation for generating a dynamic map, or tracking the tracking target object in a two-dimensional space on a basis of the information for map generation.
Yang discloses dynamic map generation for generating a dynamic map (paragraph [0089]: in addition to static map information, the management server 20 uses information or the like received from vehicles and infrastructure facilities such as roadside communication units (RSUs) to generate, update, and store, in a storage unit, the dynamic map that reflects the latest road conditions),
to convert, at least by removing height information from the three-dimensional information, the information for dynamic map generation identified by the processing circuitry as the information for dynamic map generation including the three-dimensional information and the information for dynamic map generation regarding the tracking target object into information for dynamic map generation after two-dimensional conversion that is two-dimensional space information (paragraph [0208]: conversion processing of the three-dimensional space region occupied by the object, to generate a projection image (u, v) obj projected on the two-dimensional image plane captured by the image acquisition unit (the camera) 111),
and to generate the dynamic map on a basis of the information for map generation (paragraph [0089]: in addition to static map information, the management server 20 uses information or the like received from vehicles and infrastructure facilities such as roadside communication units (RSUs) to generate, update, and store, in a storage unit, the dynamic map that reflects the latest road conditions).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Li to generate a dynamic map as taught by Yang, in order to enable more reliable object identification and thereby achieve safe traveling of a mobile device.
Li as modified by Yang discloses all the features with respect to claim 1 as outlined above. However, Li as modified by Yang fails to explicitly disclose detecting an object present in a target region or tracking the tracking target object in a two-dimensional space on a basis of the information for map generation.
Lee discloses detecting an object present in a target region (paragraph [0085]: FIG. 11, in operation 1130, the vehicle 100 or the sensing device 200 may acquire object information of at least one object identified by applying a neural network based object classification model to spatial information over time for a 3D space),
to track the tracking target object in a two-dimensional space on a basis of the information for map generation (paragraph [0070]: changes of the speed and direction of the vehicle 100 and the angle of the steering system according to a movement of the vehicle 100 may be tracked).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Li as modified by Yang to track the target object as taught by Lee, in order to collect data required for driving operations.
Regarding claim 2, Li as modified by Yang and Lee discloses the map generation device according to claim 1, wherein the processing circuitry further performs to convert the information for dynamic map generation into the information for map generation after two-dimensional conversion by deleting information indicating a height direction of the tracking target object from the information for dynamic map generation identified by the processing circuitry as the information for dynamic map generation including the three-dimensional information and the information for dynamic map generation regarding the tracking target object (Li’s paragraph [0025]: In S120, the vehicle trajectory data is matched with a lane in a two-dimensional map, and vehicle trajectory data matched with the lane is determined; Yang’s paragraph [0208]: conversion processing of the three-dimensional space region occupied by the object, to generate a projection image (u, v) obj projected on the two-dimensional image plane captured by the image acquisition unit (the camera) 111).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Li to generate a dynamic map as taught by Yang, in order to enable more reliable object identification and thereby achieve safe traveling of a mobile device; and to modify Li as modified by Yang to track the target object as taught by Lee, in order to collect data required for driving operations.
Regarding claim 3, Li as modified by Yang and Lee discloses the map generation device according to claim 1, wherein the processing circuitry further performs to convert the information for map generation after two-dimensional conversion into the information for map generation after three-dimensional conversion by adding information in a height direction of the tracking target object to the information for map generation after two-dimensional conversion (Li’s paragraph [0029]: The vehicle trajectory data is used for assigning the height to the lane in the two-dimensional map to obtain data of the three-dimensional map, and the two-dimensional map is converted to the three-dimensional map).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Li to generate a dynamic map as taught by Yang, in order to enable more reliable object identification and thereby achieve safe traveling of a mobile device; and to modify Li as modified by Yang to track the target object as taught by Lee, in order to collect data required for driving operations.
Regarding claim 4, Li as modified by Yang and Lee discloses the map generation device according to claim 1, wherein the processing circuitry further performs to convert the information for dynamic map generation into the information for map generation after two-dimensional conversion by deleting information indicating a height direction of the tracking target object from the information for dynamic map generation identified by the processing circuitry as the information for dynamic map generation including the three-dimensional information and the information for dynamic map generation regarding the tracking target object, and store the information indicating the height direction of the tracking target object in a storage device (Li's paragraph [0025]: In S120, the vehicle trajectory data is matched with a lane in a two-dimensional map, and vehicle trajectory data matched with the lane is determined; Yang's paragraph [0208]: conversion processing of the three-dimensional space region occupied by the object, to generate a projection image (u, v) obj projected on the two-dimensional image plane captured by the image acquisition unit (the camera) 111; paragraph [0089]: in addition to static map information, the management server 20 uses information or the like received from vehicles and infrastructure facilities such as roadside communication units (RSUs) to generate, update, and store, in a storage unit, the dynamic map that reflects the latest road conditions), and
to convert the information for map generation after two-dimensional conversion into the information for map generation after three-dimensional conversion by adding the information indicating the height direction of the tracking target object stored by the processing circuitry to the information for map generation after two-dimensional conversion (Li’s paragraph [0029]: The vehicle trajectory data is used for assigning the height to the lane in the two-dimensional map to obtain data of the three-dimensional map, and the two-dimensional map is converted to the three-dimensional map).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Li to generate a dynamic map as taught by Yang, in order to enable more reliable object identification and thereby achieve safe traveling of a mobile device; and to modify Li as modified by Yang to track the target object as taught by Lee, in order to collect data required for driving operations.
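For illustration only, the store-and-restore behavior recited in claim 4 can be sketched hypothetically as follows (the dict standing in for the claimed "storage device", and the object identifiers, are assumptions not taken from any cited reference):

```python
# Hypothetical sketch of claim 4's deletion, storage, and re-addition of the
# information indicating the height direction of a tracked object.

storage = {}  # stands in for the claimed storage device

def delete_and_store_height(object_id, point_3d):
    """Delete the height direction and store it, keyed by the tracked object."""
    x, y, z = point_3d
    storage[object_id] = z      # store the height information
    return (x, y)               # two-dimensional information

def restore_height(object_id, point_2d):
    """Re-add the stored height to the two-dimensional information."""
    x, y = point_2d
    return (x, y, storage[object_id])

p2 = delete_and_store_height("pedestrian-1", (4.0, 2.0, 1.6))
p3 = restore_height("pedestrian-1", p2)
```

The round trip recovers the original three-dimensional information because the deleted height is retained in storage rather than discarded.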
Regarding claim 5, Li as modified by Yang and Lee discloses the map generation device according to claim 4, wherein the processing circuitry further performs to cause the storage device to store, every time an update period elapses, the information indicating the height direction of the tracking target object that is deleted when the information for dynamic map generation is converted into the information for map generation after two-dimensional conversion (Yang's paragraph [0208]: conversion processing of the three-dimensional space region occupied by the object, to generate a projection image (u, v) obj projected on the two-dimensional image plane captured by the image acquisition unit (the camera) 111; paragraph [0089]: in addition to static map information, the management server 20 uses information or the like received from vehicles and infrastructure facilities such as roadside communication units (RSUs) to generate, update, and store, in a storage unit, the dynamic map that reflects the latest road conditions).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Li to generate a dynamic map as taught by Yang, in order to enable more reliable object identification and thereby achieve safe traveling of a mobile device; and to modify Li as modified by Yang to track the target object as taught by Lee, in order to collect data required for driving operations.
Claim 6 recites the functions of the apparatus recited in claim 1 as medium steps. Accordingly, the mapping of the prior art to the corresponding functions of the apparatus in claim 1 applies to the medium steps of claim 6.
Claim 7 recites the functions of the apparatus recited in claim 1 as apparatus steps. Accordingly, the mapping of the prior art to the corresponding functions of the apparatus in claim 1 applies to the apparatus steps of claim 7.
Allowable Subject Matter
Claims 8-11 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Claim 8 recites that identifying whether or not the acquired information for dynamic map generation includes the three-dimensional information is based on acquiring a first flag and determining a value of the first flag indicating whether the information for dynamic map generation is information for dynamic map generation including the three-dimensional information, and that identifying whether or not the acquired information for dynamic map generation is the information regarding the tracking target object whose motion needs to be tracked is implemented by acquiring a second flag and determining a value of the second flag indicating whether the information for dynamic map generation is information for dynamic map generation of an object that is to be tracked.
Claim 9 recites that converting, at least by re-adding the height dimension, the information for map generation after two-dimensional conversion extracted by the processing circuitry into information for map generation after three-dimensional conversion that is three-dimensional space information is based on determining that the tracking target object has moved in the meantime, after converting the information for dynamic map generation identified by the processing circuitry as the information for dynamic map generation including the three-dimensional information and the information for dynamic map generation regarding the tracking target object into information for map generation after two-dimensional conversion that is two-dimensional space information.
Claim 11 recites that the tracking target object is a person, and that the height dimension is indicative of a height of the person.
Li (2023/0130901), Yang (2022/0004777), and Lee (2020/0218276), alone or in combination, do not teach these features. These limitations, when read in light of the remaining limitations of the claim and of the claims from which it depends, render the claim allowable subject matter.
Claim 10 depends from claim 9 and is allowable for the same reasons as claim 9.
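The flag-based identification indicated as allowable in claim 8 can, for purposes of illustration only, be sketched as follows (the flag names and record layout are hypothetical and are not taken from the claims as filed or from any cited reference):

```python
# Hypothetical sketch of claim 8's two-flag identification scheme.

def identify(record):
    """Use two flags to identify 3D information and tracking-target information.

    first_flag  -> whether the information includes three-dimensional information
    second_flag -> whether the information regards an object to be tracked
    """
    has_3d = record.get("first_flag", 0) == 1
    is_tracked = record.get("second_flag", 0) == 1
    return has_3d, is_tracked

result = identify({"first_flag": 1, "second_flag": 0})
```

A record carrying both flags set would thus be routed to both the two-dimensional conversion and the tracking processing.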
Response to Arguments
Applicant's arguments filed 1/14/2026, pages 8-11, with respect to the rejection(s) of claims 1 and 6 under 35 U.S.C. 103, have been fully considered but they are not persuasive.
Applicant argues on pages 8-9 (i) that the references, as cited by the rejection and viewed overall, do not reasonably suggest "removing a height dimension from the three-dimensional information" as part of that conversion; on page 10 (iii) that, since the references are believed not to reasonably suggest "removing" the height dimension, the references therefore also do not reasonably suggest "re-adding the height dimension" as claimed; and on page 11, with respect to claims 6 and 2, regarding the "deleting" of the "information indicating a height direction".
In reply, Li's paragraph [0025] states: In S120, the vehicle trajectory data is matched with a lane in a two-dimensional map, and vehicle trajectory data matched with the lane is determined. The two-dimensional map is a planar surface without height information; the height information is thus removed/deleted in the two-dimensional map.
Similarly, Yang's paragraph [0208] describes conversion processing of the three-dimensional space region occupied by the object, to generate a projection image (u, v) obj projected on the two-dimensional image plane captured by the image acquisition unit (the camera) 111. The two-dimensional image plane is a planar surface without height information; the height information is thus removed/deleted in the two-dimensional projection.
The amendment to claims 1 and 6 is not significant enough to overcome the previous prior art rejection; the examiner suggests that applicant incorporate allowable subject matter, such as that of claim 8, 9, or 11, to further advance prosecution.
Applicant argues on pages 9-10 (ii) that the projection image features of Yang are contrary to what is expressly used by Li, such that the combination would frustrate Li and therefore is not obvious.
In reply, both Li and Yang generate two-dimensional and three-dimensional maps; the references are not contrary to one another and are appropriate to combine.
For example, for two-dimensional maps, Li’s paragraph [0025]: In S120, the vehicle trajectory data is matched with a lane in a two-dimensional map, and vehicle trajectory data matched with the lane is determined. Yang’s paragraph [0208]: conversion processing of the three-dimensional space region occupied by the object, to generate a projection image (u, v) obj projected on the two-dimensional image plane captured by the image acquisition unit (the camera) 111.
For three-dimensional maps, Li’s paragraph [0026]: In S130, a height is assigned to the lane in the two-dimensional map according to the matched vehicle trajectory data to obtain the three-dimensional map. Yang’s paragraph [0470]: in step S211, coordinate transformation of matching the coordinate system of the high-confidence three-dimensional object region information extracted by the high-confidence region extraction unit 124, with the coordinate system of the three-dimensional data generated by the three-dimensional analysis result generation unit 201.
Applicant argues on page 11 (iv) that it is not clear how the Lee reference suggests "to track the tracking target object in a two-dimensional space" when what is cited by the rejection appears to be in reference to a "sensed 3D space".
In reply, Lee's paragraph [0070] states that changes of the speed and direction of the vehicle 100 and the angle of the steering system according to a movement of the vehicle 100 may be tracked. When the vehicle drives on a flat road, as in Fig. 1, the speed and direction of the vehicle's movement are tracked in the two-dimensional plane of the road surface.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Yi Yang whose telephone number is (571)272-9589. The examiner can normally be reached on Monday-Friday 9:00 AM-6:00 PM EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik can be reached on 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/YI YANG/
Primary Examiner, Art Unit 2616