Prosecution Insights
Last updated: April 19, 2026
Application No. 18/432,809

Data Structure for Efficient Training of Semantic Segmentation Models

Non-Final OA — §101, §103
Filed: Feb 05, 2024
Examiner: ROSARIO, DENNIS
Art Unit: 2676
Tech Center: 2600 — Communications
Assignee: Aptiv Technologies AG
OA Round: 1 (Non-Final)
Grant Probability: 69% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 3y 8m
Grant Probability with Interview: 98%

Examiner Intelligence

Career Allow Rate: 69% — above average (385 granted / 557 resolved; +7.1% vs TC avg)
Interview Lift: +28.6% — strong (allow rate among resolved cases with vs. without interview)
Typical Timeline: 3y 8m average prosecution; 34 applications currently pending
Career History: 591 total applications across all art units
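The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, using the counts shown on the dashboard (the with/without-interview case counts behind the lift figure are not broken out, so the `lift` helper below is illustrative only):

```python
# Career allow rate: granted / resolved, from the dashboard's counts.
granted, resolved = 385, 557
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # → 69.1%

# Interview lift: allow rate with an interview minus allow rate without.
# The dashboard reports only the resulting +28.6% lift, not the
# underlying splits, so any counts passed in here are hypothetical.
def lift(granted_with, resolved_with, granted_without, resolved_without):
    return granted_with / resolved_with - granted_without / resolved_without
```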

Statute-Specific Performance

§101: 16.5% (-23.5% vs TC avg)
§103: 40.3% (+0.3% vs TC avg)
§102: 24.6% (-15.4% vs TC avg)
§112: 13.6% (-26.4% vs TC avg)
Black line = Tech Center average estimate. Based on career data from 557 resolved cases.
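Each delta is stated relative to the Tech Center average, so the implied TC baseline can be recovered by subtraction. A quick consistency check on the figures shown (all four imply a baseline of about 40%):

```python
# Recover the implied Tech Center average for each statute:
# TC average = examiner's rate - (delta vs TC average).
rates = {"101": (16.5, -23.5), "103": (40.3, +0.3),
         "102": (24.6, -15.4), "112": (13.6, -26.4)}
for statute, (rate, delta) in rates.items():
    print(f"§{statute}: implied TC avg = {rate - delta:.1f}%")  # → 40.0% each
```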

Office Action

Rejections: §101, §103
DETAILED ACTION

Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

All §103 rejections are based on DING et al. (CN 109798903 A, with Google Patents translation) in view of LI et al. (CN 114782914 A, with SEARCH machine translation), combined as follows:
- Claims 1, 13, and 14: DING in view of LI
- Claims 2-4: further in view of APUY et al. (CN 113892129 A, with SEARCH machine translation)
- Claim 5: further in view of Mielenz et al. (US 2019/0137286 A1)
- Claim 6: further in view of Mielenz as applied to claim 5, then Berger et al. (US 10,839,530 B1)
- Claim 7: further in view of Mielenz as applied to claim 5, then Behrendt (US 2020/0349365 A1)
- Claim 8: further in view of GUO et al. (CN 110148217 A, with SEARCH machine translation)
- Claim 9: further in view of GUO as applied to claim 8, then Huang et al. (US 2016/0012646 A1)
- Claim 10: further in view of GUO and Huang as applied to claim 9, then SAVKIN (DE 10 2020 110 243 A1, with SEARCH machine translation)
- Claim 11: further in view of CHEN et al. (US 2014/0233790 A1)
- Claim 12: further in view of HAN et al. (CN 112560774 A, with SEARCH machine translation)
- Claim 15: further in view of SONG et al.
(US 2017/0344015 A1). [image omitted]

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 0: The broadest reasonable interpretation is established as indicated in the footnotes throughout this action.

Step 1: Claim 1 is a process; claim 13 is a manufacture; claim 14 is a machine. [image omitted]

Step 2A, prong 1: The claims recite a judicial exception. Representative claim 1 recites "obtaining a first point cloud… joining the first and second point clouds… creating a representation… extracting from the representation a semantic map and one or more elevation maps":

1. A computer-implemented method for creating a data sample for training semantic segmentation models usable in a vehicle assistance system, the method comprising: obtaining a first point cloud representing a surrounding of a vehicle at a first point in time and a second point cloud representing the surrounding of the vehicle at a second point in time; joining the first and second point clouds to obtain a global point cloud representing the surrounding of the vehicle over a duration of the first point in time and the second point in time; creating a representation of the surrounding based on the global point cloud; extracting from the representation a semantic map and one or more elevation maps; and providing the semantic map and the one or more elevation maps as the data sample.

[image omitted]

Step 2A, prong 2: This judicial exception is not integrated into a practical application because the additional elements, such as the "vehicle," "cloud," "point," "map," and "sample," do not improve the functioning of a computer or the computer field in view of applicant's disclosure [0002]. [image omitted]

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because each additional element, such as the "medium," "memory," "processor," "vehicle," "maps," and "sample," considered individually or with the mental process, adheres to conventional practices as indicated in the background of applicant's specification [0003]-[0005]. [images omitted]

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 13, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over DING et al. (CN 109798903 A, with Google Patents translation) in view of LI et al. (CN 114782914 A, with SEARCH machine translation). [image omitted]

Re 1., DING discloses a computer-implemented method for creating (or "generating", pg. 14) a data (via "precision", pg. 9) sample [[for training semantic segmentation models usable in a vehicle assistance system]], the ("computer", pg. 5) method comprising: obtaining ("at different times", pg. 12) a first point cloud ("by a series of cloud number" and "by…cloud map number", pg. 6) representing a surrounding (resulting in a surrounded outermost boundary via "vehicle periphery", pg. 6) of a vehicle at a first (temporal order or succession) point in time and a second point cloud (same citations, pg. 6) representing the surrounding of the vehicle at a second (temporal order or succession) point in time; joining ("at different times", pg. 12) the first and second point clouds ("in conjunction with the location information and posture information of vehicle in the three dimensional point cloud", pg. 6) to obtain a global ("world", pg. 6) point cloud representing the surrounding (via "vehicle periphery") of the vehicle over a duration (via "at different times", pg. 12) of the first point in time and the second point in time; creating a representation (resulting in a "generated" "semantic map", pg. 6) of the surrounding (via "vehicle periphery") based on (via the arrow connections of fig. 2) the global ("world", pg. 6) point cloud; extracting (via "extract", pg. 7) from the representation (A) a semantic map (at the "datum" level: "extract…semantic map datum", pg. 7) and (B) one (comprised by "elevation"-"map datum", pg. 7) or (C) more elevation maps; and providing (via "display", pg. 12, and "Display module 407, for showing the road information", pg. 15) (A) the semantic map and the (B) one or (C) more elevation maps as the data (via "precision", pg. 9) sample. [figure omitted]

DING does not teach the difference of claim 1: "a…sample…as the…sample". LI teaches this difference: a data sample image serving as a data sample for examination or study "through machine learning" (pg. 6, last txt blk; via "sample image", pg. 2, 8th txt blk, and "as…data", pg. 6, last txt blk)…as the sample (via "the…semantic map is…the sample image", pg. 2, 8th txt blk, serving as a camera-recorded/created data sample for examination or study "through machine learning", pg. 6, last txt blk). [image omitted] Since DING teaches automated driving, one of skill in the art of automated driving could make DING's method be as LI's, predictably recognizing the change as "improving the environment sensing range of the automatic driving vehicle, ensuring the automatic driving vehicle to fully use the semantic element around the current position" (LI, pg. 4, 1st txt blk). [image omitted]

Claim 13 is rejected like claim 1: 13. A non-transitory computer-readable medium (or "computer equipment and storage medium", DING, pg. 1) comprising instructions including: obtaining a first point cloud representing a surrounding of a vehicle at a first point in time and a second point cloud representing the surrounding of the vehicle at a second point in time; joining the first and second point clouds to obtain a global point cloud representing the surrounding of the vehicle over a duration of the first point in time and the second point in time; creating a representation of the surrounding based on the global point cloud; extracting from the representation (A) a semantic map and (B) one or (C) more elevation maps; and providing the (A) semantic map and the (B) one or (C) more elevation maps as a data sample (via the rejection of claim 1, set forth above).

Claim 14 is rejected like claims 1 and 13: 14. An apparatus ("(system)", DING, pg. 9, 8th txt blk) comprising: memory configured to store instructions; and at least one processor configured to execute the instructions, wherein the instructions include the obtaining, joining, creating, extracting, and providing steps recited in claims 1 and 13, providing the (A) semantic map and the (B) one or (C) more elevation maps as a data sample.
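Claim 1's recited steps describe a concrete data pipeline. Purely as an illustration of those steps (a sketch with hypothetical helpers and a coarse 2D grid representation — not the applicant's or any cited reference's implementation):

```python
import numpy as np

def make_data_sample(cloud_t1: np.ndarray, cloud_t2: np.ndarray,
                     pose_t2_to_t1: np.ndarray):
    """Illustrative sketch of claim 1's steps (hypothetical helpers).

    cloud_t1, cloud_t2: (N, 3) point clouds at two points in time.
    pose_t2_to_t1: 4x4 transform aligning the second cloud to the first.
    """
    # Join the two clouds into a "global" cloud in one frame.
    homog = np.hstack([cloud_t2, np.ones((len(cloud_t2), 1))])
    cloud_t2_aligned = (pose_t2_to_t1 @ homog.T).T[:, :3]
    global_cloud = np.vstack([cloud_t1, cloud_t2_aligned])

    # Create a representation of the surrounding: a coarse grid over x/y.
    ij = np.floor(global_cloud[:, :2]).astype(int)

    # Extract an elevation map (max z per cell) and a placeholder
    # semantic map (class 0 everywhere; real labels would come from
    # annotation or a trained model).
    elevation = {}
    for cell, z in zip(map(tuple, ij), global_cloud[:, 2]):
        elevation[cell] = max(elevation.get(cell, z), z)
    semantic = {cell: 0 for cell in elevation}

    # Provide the maps together as the data sample.
    return semantic, elevation
```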
Claims 2-4 are rejected under 35 U.S.C. 103 as being unpatentable over DING in view of LI as applied to claims 1, 13, and 14, further in view of APUY et al. (CN 113892129 A, with SEARCH machine translation). [image omitted]

Re 2., DING of the combination of DING and LI teaches the method of claim 1 wherein extracting (A) the semantic map and the (B) one or (C) more elevation maps includes at least one of: (D) capturing a first view (via "display", pg. 12, and "Display module 407, for showing the road information", pg. 15) of the representation (resulting in a "generated" "semantic map", pg. 6) indicating (via the same "display" citations) elevation information from above the vehicle to create a first elevation map (comprising an "elevation"-"map datum", pg. 7); (E) capturing a second view (via the same "display" citations) of the representation indicating elevation information from below the vehicle to create a second elevation map (comprising an "elevation"-"map datum", pg. 7); or (F) capturing a third view (via the same "display" citations) of the representation indicating semantic information (via "form new semantic map Data", pg. 12) of the surrounding to create the semantic map. DING of the combination does not teach the difference of claim 2: "(D) capturing…first…from above…create…first… (E) capturing…second…from below…create…second… (F) capturing…third". APUY teaches this difference (via "capture…a third view", pg. 12, 3rd txt blk). Since DING of the combination teaches point clouds and maps, one of skill in the art of point clouds and maps could make DING's combination be as APUY's, predictably recognizing the change "to improve the presentation of map-related data and/or images" (APUY, pg. 24, 3rd txt blk).

Re 3., the combination of DING, LI, and APUY teaches the method of claim 2 wherein (D) the elevation information from above the vehicle includes distance information of objects within the representation relative from above the vehicle.

Re 4., the combination of DING, LI, and APUY teaches the method of claim 2 wherein (E) the elevation information from below the vehicle includes distance information of objects within the representation relative from below the vehicle.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over DING in view of LI as applied to claims 1, 13, and 14, further in view of Mielenz et al. (US 2019/0137286 A1). [image omitted]

Re 5., DING of the combination of DING and LI teaches the method of claim 1 further comprising: determining a first (symbol) plurality (resulting in "different zones", pg. 2, as clustered and divided) of ("described", pg. 2, or) labeled ("cloud feature", pg. 2) points ("each", pg. 2) associated with static (via "fixed region", pg. 9) objects within the surrounding of the vehicle at the first point in time in the first point cloud; and determining a third (symbol) plurality of labeled points ("cloud", pg. 10) associated with static (via "fixed region", pg. 9) objects within the surrounding of the vehicle at the second point in time in the second point cloud, wherein joining the first point cloud and the second point cloud includes joining (via "in conjunction", pg. 6) the first plurality of labeled points and the third plurality of labeled points. DING of the combination does not teach the difference of claim 5: "associated with…objects … associated with…objects". Mielenz teaches this difference: ("a list of positions" [0034]) associated with objects… ("positions") associated with ("static" [0034]) objects. Since DING of the combination teaches a fixed region, one of skill in the art of fixed regions could make DING's fixed region be as Mielenz's landmarks, predictably recognizing the change "to improve the accuracy of localization map 115" (Mielenz [0033]), thus improving the maps of DING of the combination.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over DING in view of LI and Mielenz as applied to claim 5, further in view of Berger et al. (US 10,839,530 B1). [image omitted]

Re 6., DING of the combination of DING, LI, and Mielenz teaches the method of claim 5 wherein at least one of (A) determining the first plurality of labeled points or (B) determining the third plurality of labeled points within at least one of the (C) first or (D) second point clouds (limitations previously mapped in claim 5) includes: classifying each point ("each", pg. 2) of at least one of (C) the first point cloud or (D) the second point cloud as (E) static or (F) dynamic; and adding each ("feature") point ("each", pg. 2) classified as static to at least one of (A) the first or (B) the third plurality of labeled (element) points. The combination does not teach the difference of claim 6: "classifying…as static or dynamic… adding…classified as static to at least one of". Berger teaches this difference: classifying as static or dynamic (such that "the point cloud 102 may include static/moving labels that indicate whether a point reflects a static object or a moving object", c. 5, ll. 60-65; fig. 1, 102: "POINT CLOUD")… adding (via an increasing total or sum) points classified (via a set) as static ("before the point cloud is accumulated", c. 3, ll. 39-42; fig. 20, 2010: "IDENTIFY SUBSET OF POINTS AS MOVING POINTS BASED ON PREDICTION") to at least one of ("the current point cloud to obtain an updated point cloud", c. 29, ll. 9-11; fig. 20, 2030: "ACCUMULATE POINT CLOUD"). Since Mielenz of the combination teaches a static object, one of skill in the art of point clouds could make Mielenz's combination be as Berger's, predictably recognizing the change to "lead to more complete geometry on static objects" (Berger, c. 3, ll. 30-31), and thus more complete landmarks assisting driving a vehicle.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over DING in view of LI and Mielenz as applied to claim 5, further in view of Behrendt (US 2020/0349365 A1). [image omitted]

Re 7., DING of the combination of DING, LI, and Mielenz teaches the method of claim 5 wherein at least one of (A) determining the first plurality of labeled points or (B) determining the third plurality of labeled points includes: generating bounding box ("shape", pg. 9) annotations for the static objects associated with at least one of the (A) first or (B) third plurality of labeled points. The combination does not teach the difference of claim 7: "annotations". Behrendt teaches this difference: ("3D bounding box" [0027], last sentence) annotations. Since DING of the combination teaches "Artificial intelligence" (DING, pg. 1) and a bounding box, one of skill in the art of artificial intelligence and bounding boxes could make DING's be as Behrendt's, predictably recognizing the change "to provide…refinement to" (Behrendt [0041], first and last sentences) the artificial intelligence (neural network), thus improving the AI-determined bounding box shape.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over DING in view of LI as applied to claims 1, 13, and 14, further in view of GUO et al. (CN 110148217 A, with SEARCH machine translation). [images omitted]

Re 8., LI of the combination of DING and LI teaches the method of claim 1 wherein creating the representation includes reconstructing (via a "three-dimensional reconstruction model", pg. 5, 5th txt blk) a surface including a plurality of vertices from the global point cloud. LI of the combination does not teach the difference of claim 8: "a surface including a plurality of vertices". GUO teaches this difference: a surface including a plurality of vertices ("after gridding reconstruction", pg. 7, 11th txt blk). Since LI of the combination teaches reconstruction, one of skill in the art of reconstruction could make LI's be as GUO's, predictably recognizing the change as having "the following advantages: it can effectively dividing the interested object, and can realize separate modeling of object of interest, so object reconstruction of interest does not contain redundant scene information, such as a floor, background or other attachments and so on; the other dynamic modelling, it only needs one 3D sensor and reduces the cost of dynamic modeling, and simple operation, using multi-view high-precision texture mapping improves the model resolution." (GUO, pg. 4, 6th txt blk).

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over DING in view of LI and GUO as applied to claim 8, further in view of Huang et al. (US 2016/0012646 A1). [image omitted]

Re 9., GUO of the combination of DING, LI, and GUO teaches the method of claim 8 further comprising: determining (via "is greater than", pg. 7, 6th txt blk) for each vertex ("after gridding reconstruction", pg. 7, 11th txt blk) of the plurality of vertices of the surface a predefined number (via "is greater than", pg. 7, 6th txt blk) of reference points from the global point cloud; determining (resulting in an "obtained" "3D model", pg. 2, 8th txt blk) a label ("as objects of interest", pg. 2, 11th txt blk) for each reference point of the predefined number of reference points; and labeling (ultimately resulting in a "labeled" "pixel point", pg. 2, 12th txt blk) each vertex ("of each patch", pg. 8, 11th txt blk, wherein each block patch is identical in all essentials to the vertices via "each patch corresponding to vertices", pg. 3, 11th txt blk) of the plurality of vertices according to the labels of the respective predefined number of reference points. GUO of the combination does not teach the difference of claim 9: "of reference points… for each reference point…of reference points… of reference points". Huang teaches this difference: ("The system may mesh data from multiple sensors" [0071]; fig. 2J: 1st Sensor, 2nd Sensor, N Sensor) of reference points… for each reference point (i.e., fig. 2J: "Kinnect")…of reference ("view" [0071], last sentence) points… (make use) of (via "have") reference points (via "based on reference points" in "overlapping regions" [0101], last sentence). Since GUO of the combination teaches reconstruction, one of skill in the art of reconstruction could make GUO's be as Huang's, predictably recognizing the change to "accurately" "generate" an "accurate" "3D model" (Huang [0076], penultimate sentence).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over DING in view of LI, GUO, and Huang as applied to claim 9, further in view of SAVKIN (DE 10 2020 110 243 A1, with SEARCH machine translation). [image omitted]

Re 10., GUO of the combination of DING, LI, GUO, and Huang teaches the method of claim 9 wherein labeling a vertex of the plurality of vertices according to the labels of the respective predefined number of reference points includes: determining (resulting in an "obtained" "3D model", pg. 2, 8th txt blk) a label ("as objects of interest", pg. 2, 11th txt blk) of the vertex based on a label distribution within the respective predefined number of reference points, wherein each label of the label distribution is associated with a weight factor. The combination does not teach the difference of claim 10: "based on a label distribution… the label distribution is associated with a weight factor". SAVKIN teaches this difference: ("follow the common probability distribution", pg. 10, 6th txt blk) based on a label distribution (via [0078]; equations omitted), where the label distribution (y) is associated (via equations) with a weight factor ("λ", pg. 11, via [0084] and [0086]; equations omitted). Since GUO of the combination teaches a label, one of skill in the art of labels could make GUO's be as SAVKIN's, predictably recognizing the change as having accurate distribution labels, y, via said "probability distribution" (SAVKIN, pg. 10, 6th txt blk).
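Claims 9 and 10 together describe labeling each reconstructed vertex from a label distribution over its reference points, with per-label weight factors. A minimal, hypothetical sketch of such weighted voting (the helper name, labels, and weights are illustrative, not taken from any cited reference):

```python
from collections import defaultdict

def label_vertex(reference_labels, weights):
    """Pick a vertex label by weighted vote over its reference points.

    reference_labels: labels of the predefined number of reference
    points nearest the vertex (e.g. drawn from the global point cloud).
    weights: per-label weight factors (hypothetical values; unlisted
    labels default to weight 1.0).
    """
    score = defaultdict(float)
    for label in reference_labels:
        score[label] += weights.get(label, 1.0)
    return max(score, key=score.get)

# E.g. three "road" points outweigh four "veg" points when "road"
# carries a higher weight factor:
print(label_vertex(["road", "road", "road", "veg", "veg", "veg", "veg"],
                   {"road": 2.0, "veg": 1.0}))  # → road
```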
Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over DING et al. (CN 109798903 A) with Google Patents translation in view of LI et al. (CN 114782914 A) with SEARCH machine translation as applied in claims 1 and 13 and 14 further in view of CHEN et al. (US 2014/0233790 A1): PNG media_image21.png 1140 801 media_image21.png Greyscale Re 11., DING of the combination (illustrated above) of DING,LI teaches The method of claim 1 wherein: joining53 the first point cloud and the second point cloud (see rejection claim 1) includes estimating an ego motion of the vehicle within the surround between the first point in time and the second point in time (see rejection claim 1); and joining54 the first point cloud and the second point cloud (see rejection claim 1) is based5556 on57 the estimated ego motion. DING of the combination (illustrated above) of DING,LI does not teach the difference of claim 11 of: “estimating an ego motion… based on the estimated ego motion”. CHEN teaches the difference of claim 11: estimating an ego motion (“of machine 110” [0017]: fig. 1:truck)… based on the estimated ego motion (“controller 150 may control movement of machine 110”, [0021]: fig. 1: truck 110 with computer 150 inside truck 110). Since DING of the combination (illustrated above) of DING,LI teaches a vehicle with point clouds, one of skill in the art of vehicles and pointclouds can make DING’s of the combination (illustrated above) of DING,LI be as CHEN’s predictably recognizing the change resulting in more accurate (“updated”58) point clouds “every time when a scan over the same sub-scanning region is performed”, CHEN [0031], last S, via figs. 5,6: PNG media_image22.png 1202 699 media_image22.png Greyscale Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over DING et al. (CN 109798903 A) with Google Patents translation in view of LI et al. 
(CN 114782914 A) with SEARCH machine translation as applied in claims 1 and 13 and 14, further in view of HAN et al. (CN 112560774 A) with SEARCH machine translation:

[Examiner's claim-mapping illustrations (media_image23.png, media_image24.png) omitted]

Re 12., LI of the combination (illustrated above) of DING, LI teaches: The method of claim 1 further comprising (“machine learning”, pg. 2, 8th txt blk) training a (comma-listed) first (“road”) semantic (“map”) segmentation model (“and so on”, pg. 2, 8th txt blk & pg. 5, 5th txt blk) using the data sample.

LI of the combination (illustrated above) of DING, LI does not teach the difference of claim 12 of: “semantic segmentation”. HAN teaches the difference of claim 12: semantic segmentation (“comprises a backbone network, a convolutional network and an up-sampling accumulation network”, pg. 2, last txt blk: fig. 3: [illustration (media_image25.png) omitted]).

Since LI of the combination (illustrated above) of DING, LI teaches a segmentation model, one of skill in the art of segmentation models can make LI’s of the combination (illustrated above) of DING, LI be as HAN’s, predictably recognizing in the combination (illustrated above) the change “more quickly and accurately identifying the target obstacle, improving the safety of the automatic driving”, HAN, pg. 7, 1st txt blk.

Claim(s) 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over DING et al. (CN 109798903 A) with Google Patents translation in view of LI et al. (CN 114782914 A) with SEARCH machine translation as applied in claims 1 and 13 and 14, further in view of SONG et al. (US 2017/0344015 A1):

[Examiner's claim-mapping illustration (media_image1.png) omitted]

Re 15., DING of the combination of DING, LI teaches (“determining the current position of”, DING, pg. 2, 8th txt blk): The vehicle comprising the apparatus (“(system)”, DING, pg. 9, 8th txt blk) of claim 14.
DING of the combination of DING, LI does not teach the difference of claim 15 of: “vehicle comprising the apparatus”. SONG teaches the difference of claim 15: vehicle comprising the apparatus (“610, a storage apparatus 620 and a processor 630” [0107]: fig. 6): [illustration (media_image26.png) omitted]

Since DING of the combination of DING, LI teaches a vehicle, one of skill in the art of vehicles can make DING’s of the combination of DING, LI be as SONG’s, predictably recognizing the change “to provide an improved driverless vehicle and an improved method, apparatus and system for positioning a driverless vehicle, in order to solve the technical problem mentioned in the foregoing Background section”, SONG, [0007].

Conclusion

The prior art “nearest to the subject matter defined in the claims” (MPEP 707.05) made of record and not relied upon is considered pertinent to applicant's disclosure. The following table lists several references that are relevant to the subject matter claimed and disclosed in this Application. The references are not relied on by the Examiner, but are provided to assist the Applicant in responding to this Office action.

Citation / Relevance:

IDS (2/5/2024) cited OUYANG et al. (CN 115496792 A) with SEARCH machine translation: an x-reference. OUYANG teaches “global point cloud map” via page 6: “obtaining the initial position of the mobile robot on the global point cloud map, correcting the front end estimation of the speedometer, for the inter-frame point cloud registration when the front-end milemeter is estimated; using the semantic label to perform inter-class separation and intra-class matching, realizing the acceleration of the point cloud search,” as the closest to the claimed “global point cloud” of claim 1.

IDS (2/5/2024) cited Dube et al.
(SegMap: Segment-based mapping and localization using data-driven descriptors): an x-reference. Dube teaches “global point cloud descriptor” via page 4, right col., 1st para, 4th S: “Recently, Cop et al. (2018) proposed to leverage LiDAR intensity information with a global point cloud descriptor.” as the closest to the claimed “global point cloud” of claim 1.

Common inventor Braun et al.: Moritz Luszek (Quantification of Uncertainties in Deep Learning-based Environment Perception). Braun et al.: Moritz Luszek teaches “concatenate…subsequent point clouds…in time” via page 5, A. Data Preprocessing, 2nd para, 2nd S: “In order to generate denser input scans, we concatenate a fixed amount of subsequent point clouds and compensate for motion of the ego vehicle between successive recordings in time.” as the closest to the claimed “joining the first and second point clouds…in time” of claim 1.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENNIS ROSARIO, whose telephone number is (571) 272-7397. The examiner can normally be reached Monday-Friday, 9 AM-5 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw, can be reached at 571-272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DENNIS ROSARIO/ Examiner, Art Unit 2676
/Henok Shiferaw/ Supervisory Patent Examiner, Art Unit 2676

1 The preamble’s “for training semantic segmentation models usable in a vehicle assistance system” does not serve to limit or redefine claim 1.
2 cloud: 1) a visible collection of particles of water or ice suspended in the air, usually at an elevation above the earth's surface. 2) any similar mass, especially of smoke or dust. (Dictionary.com)
3 background: one's origin, education, experience, etc., in relation to one's present character, status, etc., wherein experience is defined: knowledge or practical wisdom gained from what one has observed, encountered, or undergone, wherein practical is defined: of or relating to practice or action, wherein practice is defined: custom, wherein custom is defined: convention, wherein convention is defined: conventionalism, wherein conventionalism is defined: adherence to or advocacy of conventional attitudes or practices (Dictionary.com)
4 [[brackets]] represent claim limitations in the preamble that do not serve to further limit or re-define claim 1’s other limitations under the broadest reasonable interpretation of claim 1 via MPEP
5 series: a group or a number of related or similar things, events, etc., arranged or occurring in temporal, spatial, or other order or succession; sequence. (Dictionary.com)
6 periphery: the outermost boundary of an area, wherein area is defined: the extent of a two-dimensional surface enclosed within a specified boundary or geometric figure, wherein enclosed is defined: to close; hem in; surround (Dictionary.com)
7 series: a group or a number of related or similar things, events, etc., arranged or occurring in temporal, spatial, or other order or succession; sequence. (Dictionary.com)
8 series: a group or a number of related or similar things, events, etc., arranged or occurring in temporal, spatial, or other order or succession; sequence. (Dictionary.com)
9 series: a group or a number of related or similar things, events, etc., arranged or occurring in temporal, spatial, or other order or succession; sequence. (Dictionary.com)
10 map: a representation, usually on a flat surface, as of the features of an area of the earth or a portion of the heavens, showing them in their respective forms, sizes, and relationships according to some convention of representation. (Dictionary.com)
11 based: to place or establish on a base or basis; ground; found (usually followed by on or upon). (Dictionary.com)
12 “based” is a past participle contributing to the action of the claimed “creating”
13 generate: to bring into existence; cause to be; produce, wherein produce is defined: to make or manufacture, wherein make is defined: to establish or enact; put into existence, wherein establish is defined: to found, institute, build, or bring into being on a firm or stable basis, wherein found is defined: 1) to set up or establish on a firm basis or for enduring existence; 3) to base or ground (usually followed by on or upon). (Dictionary.com)
14 “based” and “generate” are an identity, wherein identity is defined: Logic., an assertion that two terms (“based” and “generate”) refer to the same thing. (Dictionary.com)
15 map: a representation, usually on a flat surface, as of the features of an area of the earth or a portion of the heavens, showing them in their respective forms, sizes, and relationships according to some convention of representation. (Dictionary.com)
16 On: in connection, association, or cooperation with (Dictionary.com)
17 Markush element follows: [(A) and (B or C)]
18 map: a representation, usually on a flat surface, as of the features of an area of the earth or a portion of the heavens, showing them in their respective forms, sizes, and relationships according to some convention of representation. (Dictionary.com)
19 datum: a single piece of information, as a fact, statistic, or code; an item of data. (Dictionary.com)
20 and: (used to connect [Markush] alternatives) (Dictionary.com)
21 Since Markush alternatives (A) & (B) are taught, the Markush element [(A) and (B or C)] is taught under the broadest reasonable interpretation of claim 1.
22 map: a representation, usually on a flat surface, as of the features of an area of the earth or a portion of the heavens, showing them in their respective forms, sizes, and relationships according to some convention of representation. (Dictionary.com)
23 and: (used to connect [Markush] alternatives) (Dictionary.com)
24 sample (noun)
25 as: in the role of; being (Dictionary.com)
26 sample (noun)
27 sample (adjective): serving as a specimen, wherein specimen is defined: (in medicine, microbiology, etc.) a sample of a substance or material for examination or study, wherein material is defined: a group of ideas, facts, data, etc., that may provide the basis for or be incorporated into some integrated work (Dictionary.com): data sample
28 as: in the role of; being (Dictionary.com)
29 is: 3rd person singular present indicative of be. (Dictionary.com)
30 sample (adjective): serving as a specimen, wherein specimen is defined: (in medicine, microbiology, etc.)
a sample of a substance or material for examination or study, wherein material is defined: a group of ideas, facts, data, etc., that may provide the basis for or be incorporated into some integrated work (Dictionary.com): data sample
31 Applicant’s disclosure: [0067] The term non-transitory computer-readable medium does not encompass transitory electrical or electromagnetic signals propagating through a medium (such as on a carrier wave). Non-limiting examples of a non-transitory computer-readable medium are nonvolatile memory circuits (such as a flash memory circuit, an erasable programmable read-only memory circuit, or a mask read-only memory circuit), volatile memory circuits (such as a static random access memory circuit or a dynamic random access memory circuit), magnetic storage media (such as an analog or digital magnetic tape or a hard disk drive), and optical storage media (such as a CD, a DVD, or a Blu-ray Disc).
32 2nd Markush element follows: D, E, or F
33 Since the combination of DING, LI, APUY teaches Markush alternative (F), the Markush element [D, E, or F] is taught under the broadest reasonable interpretation of claim 2; thus, Markush alternatives D, E are also taught under the broadest reasonable interpretation of claim 2.
34 Claim 3 is directed to Markush alternative (D) of the Markush element [D, E, or F] and thus is already taught by the combination of DING, LI, APUY in claim 2 under the broadest reasonable interpretation of claims 2, 3.
35 Claim 4 is directed to Markush alternative (E) of the Markush element [D, E, or F] and thus is already taught by the combination of DING, LI, APUY in claim 2 under the broadest reasonable interpretation of claims 2, 4.
36 described: to pronounce, as by a designating term, phrase, or the like; label. (Dictionary.com)
37 each: every one of two or more considered individually or one by one, wherein one by one is defined: Also, one at a time. Individually in succession, as in The ducklings jumped into the pond one by one, or One at a time they went into the office. Formerly also put as one and one and one after one, this idiom dates from about a.d. 1000, wherein succession is defined: the coming of one person or thing after another in order, sequence, or in the course of events, wherein sequence is defined: Mathematics., a set whose elements have an order similar to that of the positive integers; a map from the positive integers to a given set, wherein integer is defined: Mathematics., one of the positive or negative numbers 1, 2, 3, etc., or zero, wherein number is defined: A member of the set of positive integers. Each number is one of a series of unique symbols, each of which has exactly one predecessor except the first symbol in the series (1), and none of which are the predecessor of more than one number. (Dictionary.com)
38 described: to pronounce, as by a designating term, phrase, or the like; label. (Dictionary.com)
39 of: (used to indicate possession, connection, or association), wherein association is defined: the act of associating or state of being associated, wherein associate is defined: to unite; combine, wherein unite is defined: to join, combine, or incorporate so as to form a single whole or unit, wherein join is defined: to come into contact or union with, wherein union is defined: a number of persons, states, etc., joined or associated together for some common purpose. (Dictionary.com)
40 Markush elements (1)(2)(3) follow: (1) [A or B]; (2) [C or D]; (3) [E or F]
41 accumulate: to gather or become gathered together in an increasing quantity; amass; collect, wherein quantity is defined: a specified or definite amount, weight, number, etc, wherein amount is defined: the total of two or more quantities; sum (Dictionary.com)
42 subset: Mathematics., a set consisting of elements of a given set that can be the same as the given set or smaller, wherein set is defined: Mathematics., a collection of objects or elements classed together. (Dictionary.com)
43 refinement: an improved, higher, or extreme form of something (Dictionary.com)
44 Claim 9 appears to have shifted away from reconstruction and instead focuses on labelling.
45 for: contributive to (Dictionary.com)
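As background for the ego-motion limitation of claim 11, and the Braun et al. passage quoted in the Conclusion ("concatenate a fixed amount of subsequent point clouds and compensate for motion of the ego vehicle"), joining two point clouds recorded at different times typically means applying the estimated frame-to-frame rigid transform to the earlier cloud before concatenation. The sketch below assumes a 2D rigid-body motion (yaw rotation plus translation) taking t0 coordinates into the t1 frame; all names are hypothetical and do not come from the application or the cited references.

```python
import math

def join_point_clouds(cloud_t0, cloud_t1, ego_yaw, ego_translation):
    """Join two 2D point clouds recorded at times t0 and t1 by compensating
    the estimated ego motion between them.

    ego_yaw, ego_translation: the estimated rigid transform (rotation angle
    in radians, (tx, ty) translation) taking t0-frame coordinates into the
    t1 frame. (Hypothetical names; for illustration only.)
    """
    c, s = math.cos(ego_yaw), math.sin(ego_yaw)
    tx, ty = ego_translation
    # express the t0 points in the t1 frame: rotate, then translate
    compensated = [(c * x - s * y + tx, s * x + c * y + ty) for x, y in cloud_t0]
    # the joined (denser) cloud is the union of both scans in one frame
    return compensated + list(cloud_t1)

# Example: with identity rotation and translation (1, 0), a t0 point at
# (0, 0) is mapped to (1, 0) in the t1 frame before the clouds are merged.
joined = join_point_clouds([(0.0, 0.0)], [(2.0, 0.0)],
                           ego_yaw=0.0, ego_translation=(1.0, 0.0))
print(joined)  # -> [(1.0, 0.0), (2.0, 0.0)]
```

In practice the transform would come from an odometry or registration step (the "estimating an ego motion" of claim 11); here it is simply passed in.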

Prosecution Timeline

Feb 05, 2024
Application Filed
Dec 08, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586184
METHODS AND APPARATUS FOR ANALYZING PATHOLOGY PATTERNS OF WHOLE-SLIDE IMAGES BASED ON GRAPH DEEP LEARNING
2y 5m to grant Granted Mar 24, 2026
Patent 12585733
SYSTEMS AND METHODS OF SENSOR DATA FUSION
2y 5m to grant Granted Mar 24, 2026
Patent 12536786
IMAGE LOCALIZATION USING A DIGITAL TWIN REPRESENTATION OF AN ENVIRONMENT
2y 5m to grant Granted Jan 27, 2026
Patent 12518519
PREDICTOR CREATION DEVICE AND PREDICTOR CREATION METHOD
2y 5m to grant Granted Jan 06, 2026
Patent 12518404
SYSTEMS AND METHODS FOR MACHINE LEARNING BASED PHYSIOLOGICAL MOTION MEASUREMENT
2y 5m to grant Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
69%
Grant Probability
98%
With Interview (+28.6%)
3y 8m
Median Time to Grant
Low
PTA Risk
Based on 557 resolved cases by this examiner. Grant probability derived from career allow rate.
