DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Claims 1, 8, and 15 have been amended.
Claims 5-7 and 12-14 have been canceled.
No new claims have been introduced.
Claims 1-4, 8-11, and 15 are currently pending.
Examiner Note
In the correspondence below, where a "node" is claimed, the prior art applied may disclose a node (or "point(s)"), cluster(s), edge(s), graph(s), or lane/road features. Specifically, Wheeler (US 20210172756 A1) uses nodes (or points) to create clusters; clusters in combination with edges are used to create graphs; graphs are compiled and used in a neural network to create lane/road features; said features are used to create maps; and maps are used to autonomously navigate a vehicle (e.g., steering, acceleration, and braking). Thus, nodes are the building blocks for the terms above, and disclosing any of the above terms also discloses at least one node.
Further, the term "encode" or "encoding" will be interpreted according to its plain meaning in the art, specifically, to convert information into a particular form or format.
For the correspondence below, Wheeler (US 20210172756 A1) discloses creating and refreshing/updating HD maps containing road geometry, locations of lanes, respective lane semantics, center lines, merging lanes, intersections, lane speed, lane trajectory, lane type, landmarks, speed bumps, curbs, signs, traffic lights, etc. These features are interpreted as first, second, third, fourth, and so on, features.
Lastly, the first, second, and third encoders recited throughout the claims are interpreted as a single piece of hardware, specifically, processor 110 (the instant specification at [0060-0066] discloses that the three claimed encoders are processor 110).
The addition or removal of any prior art reference does not imply that the amendments overcame the prior art; references were removed for overlapping teachings or added for additional teachings.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09-10-2025 has been entered.
The official correspondence below is a first action non-final on an RCE.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 8-11, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Wheeler (US 20210172756 A1) in view of Johnston (US 20190285421 A1) and in further view of Szczerba (US 20100253602 A1) and Zhang (CN 110110029 A).
REGARDING CLAIM 1, Wheeler discloses, encoding first feature information associated with a node-edge-level feature of a road graph (Wheeler: [0006] a system uses image pixel classification of lane lines by, for example, running a deep learning algorithm on camera images; [0085] HD map system 110 may be a distributed system comprising a plurality of processors; [0158] FIG. 28B illustrates a flow chart describing the lane line creation process ... determined by an image segmentation deep learning model; [0187] individual lane elements are represented as nodes on the graph connected by edges to other nodes; [FIG. 4A]) from node feature information and edge feature information of the road graph (Wheeler: [0187] Lane elements are stored as pieces of a lane element graph. Within the lane element graph, individual lane elements are represented as nodes on the graph connected by edges to other nodes, representing neighboring lane elements of the graph. The edges connecting two lane elements indicate physical connection between two lane elements that a vehicle can legally traverse ... boundaries between lane lines over which cars cannot cross have a representation distinct from the above edges of the lane element graph) using a first encoder based on a multilayer perceptron (Wheeler: [0164] The HD map system uses deep learning techniques to determine a probability map; [FIG. 4A]; [0082] FIG. 
4A illustrates the various layers of instructions in the HD Map API of a vehicle computing system; [0113] the image classification model could implement additional layers in its convolutional neural network; [0161] same information in three-dimensions to provide an additional layer of information to the online HD map system); encoding second feature information associated with a graph-level feature of the road graph from the first feature information using a second encoder based on a graph neural network (GNN) (Wheeler: [0106] In order to build a Landmark Map (LMap) the HD map system needs to know the location and type for every traffic sign. To determine the type of sign, the HD map system uses image based classification. This can be done by a human operator or automatically by deep learning algorithms; [ABS] The system builds a large connected network of lane elements and their connections as a lane element graph; [0191-0192] Each configuration can be represented as a directed graph, with node being the lane elements drive into/out of the intersection, and edges are the lane connectors. Each node is labeled with diving restrictions ... The lane element graph module 470 identifies 3602 lane cuts from lane lines and navigable boundaries. The lane cut lines and navigable boundaries are generated from a plurality of received image frames from an imaging system mounted on a vehicle; [FIG. 38ABC, 39, 40]; [FIG. 
4A]; [0113] the image classification model could implement additional layers in its convolutional neural network; [0161] same information in three-dimensions to provide an additional layer of information to the online HD map system); encoding third feature information associated with a time-series feature of the road graph from a series of the second feature information using a third encoder based on a recurrent neural network (Wheeler: [0058] freshness of data by ensuring that the map is updated to reflect changes on the road within a reasonable time frame; [0080] a time-to-live (TTL) parameter specifying a time period after which the route data can be deleted; [FIG. 4A]; [0082] FIG. 4A illustrates the various layers of instructions in the HD Map API of a vehicle computing system; [0113] the image classification model could implement additional layers in its convolutional neural network; [0161] same information in three-dimensions to provide an additional layer of information to the online HD map system), and outputting control information of a vehicle mapped to the road graph from the third feature information (Wheeler: [0054] Embodiments of the invention maintain high definition (HD) maps containing up to date information using high precision. The HD maps may be used by autonomous vehicles to safely navigate to their destinations without human input or with limited human input (examiner: see [0101-0103], [0183-0185], and [0200-0204] for features used to navigate); [0066] The vehicle controls 130 control the physical movement of the vehicle, for example, acceleration, direction change, starting, stopping, and so on. The vehicle controls 130 include the machinery for controlling the accelerator, brakes, steering wheel, and so on) by reinforcement learning for a policy network (Wheeler: [0113] implement additional layers in its convolutional neural network; [0134] all images of cluster compared against corresponding deep learning result. 
The HD map system determines a weighted aggregate of these scores and ranks the features to select the best features; [0137] The HD map system 110 applies a convolutional neural network model to the identified portion of the image; [0184-0185] The HD map system applies machine learning techniques (e.g., deep learning) to these images to extract road features (e.g., lane lines). In an embodiment, the HD map system merges the gray-scale image and RGB image into a single 4-channel matrix to learn the model since deep learning can process the input data independent of the number of channels in the input data ... deep learning step, each image pixel is labeled as either “lane line” or “not lane line”. In some embodiments, the HD map system uses machine learning based models that further categorize lane lines into different types; [0058] freshness of data by ensuring that the map is updated to reflect changes on the road within a reasonable time frame (examiner: see [0101-0103], [0183-0185], and [0200-0204] for features updated); [0064] keep the HD map data stored locally in the vehicle updated on a regular basis (examiner: see [0101-0103], [0183-0185], and [0200-0204] for features updated)), and wherein, the node feature information or edge feature information of the road graph change according to a movement of the vehicle (Wheeler: [0058] freshness of data by ensuring that the map is updated to reflect changes on the road within a reasonable time frame (examiner: see [0101-0103], [0183-0185], and [0200-0204] for features updated); [0064] keep the HD map data stored locally in the vehicle updated on a regular basis (examiner: see [0101-0103], [0183-0185], and [0200-0204] for features updated)), wherein the road graph includes at least one node (Wheeler: [0085] a lane element graph module 470; [0187] Lane elements are stored as pieces of a lane element graph. 
Within the lane element graph, individual lane elements are represented as nodes on the graph connected by edges to other nodes, representing neighboring lane elements of the graph; [FIG. 38ABC, 39, 40]; [0191] represented as a directed graph, with node being the lane elements drive into/out of the intersection, and edges are the lane connectors. Each node is labeled with diving restrictions), each node corresponding to a point on a road (Wheeler: [0187] the lane element graph, individual lane elements are represented as nodes on the graph connected by edges to other nodes), and at least one edge (Wheeler: [0187] the lane element graph, individual lane elements are represented as nodes on the graph connected by edges to other nodes), each edge corresponding to a connection relationship between nodes (Wheeler: [0187] the lane element graph, individual lane elements are represented as nodes on the graph connected by edges to other nodes), to output the second feature information from the first feature information (Wheeler: [0106]; [ABS]; [0191-0192] see intersections, lanes, lane trajectory, edges, and labels; [FIG. 38ABC, 39, 40]; [FIG. 4A]; [0082]; [0113]; [0161]), and updates the node feature information of the road graph at the time of each encoding execution (Wheeler: [0055] Embodiments of the invention generate and maintain high definition (HD) maps that are accurate and include the most updated road conditions for safe navigation. For example, the HD maps provide the current location of the autonomous vehicle relative to the lanes of the road precisely enough to allow the autonomous vehicle to drive safely in the lane; [0058] freshness of data by ensuring that the map is updated to reflect changes on the road within a reasonable time frame; [0062-0064]; [0085-0086]; [0177] The threshold parameters may be determined manually or based on the characteristics of the polyline 3354. 
In the aforementioned embodiment, extraneous points 3380 must be analyzed to confirm that they are not inflection points. For example, all lane line points 2925 on the polyline between the endpoints are analyzed to identify any points greater than a threshold distance from the polyline. If no lane line points are identified, all points between the endpoints are removed from the polyline consists of only the endpoints. Alternatively, if a lane line point 2925 is identified with a distance from the polyline 3354 above a threshold distance, the polyline 3354 is shortened by adjusting one endpoint closer to the identified lane line point. Adjusting the endpoints of the polyline 3354 may be performed by identifying a first midpoint of the entire polyline and identifying any lane line points 2925 between the first midpoint and the first endpoint of the polyline that are a distance greater than the threshold distance from the polyline. If no lane line point is identified, the first midpoint is set as a new endpoint and the above process is performed for a second midpoint that lies between the first midpoint and the second endpoint. If a lane line point 2925 is identified, each lane line point 2925 between the first midpoint and the first endpoint is analyzed. Once the lane line point 2925 has been identified, it is set as a new endpoint for the polyline 3354. The processes described above are performed iteratively until the polyline endpoint and the identified lane line point 2925 overlap at the same point), to be reflected in the second feature information associated with the graph-level feature (Wheeler: [0106]; [ABS]; [0191-0192]; [FIG. 38ABC, 39, 40]; [FIG. 
4A]; [0082]; [0113]; [0161]), and wherein the encoding of the third feature information includes executing the third encoder at predetermined time intervals (Wheeler: [0058] freshness of data by ensuring that the map is updated to reflect changes on the road within a reasonable time frame (examiner: see [0101-0103], [0183-0185], and [0200-0204] for features updated); [0064] keep the HD map data stored locally in the vehicle updated on a regular basis (examiner: see [0101-0103], [0183-0185], and [0200-0204] for features updated)) to output the third feature information associated with a time-series feature of the road graph changing (Wheeler: [0058] freshness of data by ensuring that the map is updated to reflect changes on the road within a reasonable time frame (examiner: see [0101-0103], [0183-0185], and [0200-0204] for features updated); [0064] keep the HD map data stored locally in the vehicle updated on a regular basis (examiner: see [0101-0103], [0183-0185], and [0200-0204] for features updated)) according to the movement of the vehicle on the road over time from a series of the second feature information (Wheeler: [0070] The perception module 210 receives sensor data 230 from the sensors 105 of the vehicle 150. This includes data collected by cameras of the car, LIDAR, IMU, GPS navigation system, and so on. The perception module 210 uses the sensor data to determine what objects are around the vehicle, the details of the road on which the vehicle is travelling; [0086] The map creation module 410 creates the map from map data collected from several vehicles that are driving along various routes. Map data may comprise traffic signs to be stored in the map as will be described further in FIGS. 9 & 10. The map update module 420 updates previously computed map data by receiving more recent information from vehicles that recently travelled along routes on which map information changed. 
For example, if certain road signs have changed or lane information has changed as a result of construction in a region, the map update module 420 updates the maps accordingly).
Wheeler discloses “[0222] In one embodiment, lane cut generation occurs after all input features (i.e., explicit/implicit lane lines and navigable boundaries) have been curated. Although more complexity is added to the feature review workflow, as there are dependencies among feature types (lane lines and navigable boundaries are reviewed before lane cuts become available), the detection of topological changes in road network can be done with more confidence and detected lane cuts are more likely to be correct”.
Wheeler does not explicitly disclose, the encoding of the second feature information includes in which the second encoder based on the GNN performing encoding a predetermined number of times wherein the predetermined number of times is a parameter for a distance of a neighboring node.
However, in the same field of endeavor, Johnston discloses, the encoding of the second feature information includes in which the second encoder based on the GNN performing encoding a predetermined number of times wherein the predetermined number of times is a parameter for a distance of a neighboring node (Johnston: [0057] the node having a node position that is closest (e.g., based on Euclidean distance or another distance measure) to the position of the instance of location information/data satisfies a distance threshold requirement, the instance of location information/data may be matched to that node. If the node having a node position that is closest (e.g., based on Euclidean distance or another distance measure) to the position of the instance of location information/data does not satisfy the distance threshold requirement, the instance of location information/data may be marked as being not matchable to an existing node of the LNG model), for the benefit of generating and updating a lane network graph (LNG) model.
In this case, "wherein the predetermined number of times is a parameter for a distance of a neighboring node" is interpreted as collecting data meeting a sampling criterion (a threshold, in this case a distance requirement).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by Wheeler to include updating using only data that meets a predetermined criterion taught by Johnston. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to generate and update a lane network graph (LNG) model.
As cited above, the examiner submits, Wheeler, as modified, discloses, the encoding of the second feature information includes in which the second encoder based on the GNN performing encoding a predetermined number of times. However, should it be found that Wheeler, as modified, does not explicitly disclose, the encoding of the second feature information includes in which the second encoder based on the GNN performing encoding a predetermined number of times, in the same field of endeavor, Szczerba discloses applying thresholds to inputs ([0154] programming to monitor inputs from various sources; discern from the inputs critical information by applying critical criteria including preset thresholds, learned thresholds, and/or selectable thresholds to the inputs, wherein the thresholds are set to minimize non-critical distractions upon the operator; and requests graphics for display based upon the critical information; [0155] Thresholds determining critical information from the inputs can be based upon a number of bases. The HUD system manager has access to a number of input sources of information and includes various programmed applications to create a contextual operational environment model to determine whether gathered information is critical information...), for the benefit of determining a confidence of the information from different sources.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by a modified Wheeler to create thresholds for critical information taught by Szczerba. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to determine a confidence of the information from different sources.
The examiner respectfully submits, Wheeler, as modified, discloses, the predetermined number of times is a parameter for a distance of a neighboring node.
However, should it be found Wheeler, as modified, fails to disclose, the predetermined number of times is a parameter for a distance of a neighboring node, in the same field of endeavor, Zhang discloses, the predetermined number of times is a parameter for a distance of a neighboring node (Zhang: [ABS] determining the distance threshold value of the preset distance between the current sampling moment of the positioning data and the positioning data indicated by the history sampling time is greater than between the position; the response distance is greater than the preset threshold to determining, according to the preset map, obtaining candidate lane information set matching with the positioning data of the current sampling moment), for the benefit of speeding up lane matching according to the positioning data.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by a modified Wheeler to create thresholds for candidate lane information taught by Zhang. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to speed up lane matching according to the positioning data.
REGARDING CLAIM 2, Wheeler, as modified, remains as applied above to claim 1, and further, Wheeler also discloses, generating the node feature information based on a positional relationship between a vehicle on the road and a node of the road graph (Wheeler: [0057]); and generating the edge feature information according to a driving direction of the road (Wheeler: [0103]; [0160]; [0187]; [0219]).
REGARDING CLAIM 3, Wheeler, as modified, remains as applied above to claim 1, and further, Wheeler also discloses, the node feature information includes information about a relative position between a node and a vehicle (Wheeler: see above ([0057]; [0103]; [0160]; [0187]; [0219]); [0055]; [0057]), speed information of the vehicle (Wheeler: [0093]; [0186]; [0189]), and information about whether a node is a node closest to the vehicle (Wheeler: see above ([0103]; [0160]; [0187]; [0219]); [0055]; [0057]; [0078]; [0093]; [0189]).
REGARDING CLAIM 4, Wheeler, as modified, remains as applied above to claim 1, and further, Wheeler also discloses, the encoding of the first feature information includes: encoding node-level feature information from node feature information of each node of the road graph by executing the first encoder (Wheeler: [FIG. 41]; [0049]; [0204]; [0184-0185]); encoding edge-level feature information from edge feature information of each edge of the road graph by executing the first encoder (Wheeler: [0159]; [0187]; [0192]); and outputting the first feature information based on the node-level feature information and the edge-level feature information (Wheeler: [0007]).
REGARDING CLAIM 8, Wheeler discloses, a processor (Wheeler: [0052]); and a memory configured to store a road graphical neural network (Road-GNN) (Wheeler: [0054]) including a first encoder, a second encoder, and a third encoder (Wheeler: [FIG. 4A]; [0082]; [0113]; [0161]), and at least one instruction (Wheeler: [0052]), wherein, when executed by the processor, the at least one instruction is configured to cause the processor to perform: a first operation of encoding first feature information associated with a node-edge-level feature of a road graph (Wheeler: [0006]; [0158]; [0187]) from node feature information and edge feature information of the road graph (Wheeler: [0187]) using the first encoder based on a multilayer perceptron (Wheeler: [0164]; [FIG. 4A]; [0082]; [0113]; [0161]); a second operation of encoding second feature information associated with a graph-level feature of the road graph from the first feature information using the second encoder based on a graph neural network (GNN) (Wheeler: [0106]; [ABS]; [0191-0192]; [FIG. 38ABC, 39, 40]; [FIG. 4A] encoders; [0082]; [0113]; [0161]); and a third operation of encoding third feature information associated with a time-series feature of the road graph from a series of the second feature information using a third encoder (Wheeler: [0058]; [0080]; [FIG. 4A]; [0082]; [0113]; [0161]) based on a recurrent neural network (Wheeler: [0086]; [0106]), and a fourth operation of outputting control information of a vehicle mapped to the road graph from the third feature information by reinforcement learning for a policy network (Wheeler: [0054] Embodiments of the invention maintain high definition (HD) maps containing up to date information using high precision.
The HD maps may be used by autonomous vehicles to safely navigate to their destinations without human input or with limited human input; [0066] The vehicle controls 130 control the physical movement of the vehicle, for example, acceleration, direction change, starting, stopping, and so on. The vehicle controls 130 include the machinery for controlling the accelerator, brakes, steering wheel, and so on), wherein, the node feature information or edge feature information of the road graph change according to a movement of the vehicle (Wheeler: [0058] freshness of data by ensuring that the map is updated to reflect changes on the road within a reasonable time frame (examiner: see [0101-0103], [0183-0185], and [0200-0204] for features updated); [0064] keep the HD map data stored locally in the vehicle updated on a regular basis (examiner: see [0101-0103], [0183-0185], and [0200-0204] for features updated)), and the road graph includes at least one node (Wheeler: [0085]; [0187]; [FIG. 38ABC, 39, 40]; [0191]), each node corresponding to a point on a road, and at least one edge, each edge corresponding to a connection relationship between nodes (Wheeler: [0187]), wherein the second operation includes an operation of encoding by the second encoder (Wheeler: [0070] The perception module 210 receives sensor data 230 from the sensors 105 of the vehicle 150. This includes data collected by cameras of the car, LIDAR, IMU, GPS navigation system, and so on. 
The perception module 210 uses the sensor data to determine what objects are around the vehicle, the details of the road on which the vehicle is travelling; [0086]) and updating the node feature information of the road graph (Wheeler: [0055]; [0058]; [0062-0064]; [0085-0086]; [0177]) at the time of each encoding execution (Wheeler: [0177] The processes described above are performed iteratively until the polyline endpoint and the identified lane line point 2925 overlap at the same point), to be reflected in the second feature information associated with the graph-level feature (Wheeler: [0106]; [ABS]; [0191-0192]; [FIG. 38ABC, 39, 40]; [FIG. 4A]; [0082]; [0113]; [0161]) and wherein the third operation includes an operation of executing the third encoder at predetermined time intervals (Wheeler: [0058] freshness of data by ensuring that the map is updated to reflect changes on the road within a reasonable time frame (examiner: see [0101-0103], [0183-0185], and [0200-0204] for features updated); [0064] keep the HD map data stored locally in the vehicle updated on a regular basis (examiner: see [0101-0103], [0183-0185], and [0200-0204] for features updated)) to output the third feature information associated with a time-series feature of the road graph (Wheeler: [0058] freshness of data by ensuring that the map is updated to reflect changes on the road within a reasonable time frame (examiner: see [0101-0103], [0183-0185], and [0200-0204] for features updated); [0064] keep the HD map data stored locally in the vehicle updated on a regular basis (examiner: see [0101-0103], [0183-0185], and [0200-0204] for features updated)) changing according to the movement of the vehicle on the road over time from a series of the second feature information (Wheeler: [0070] The perception module 210 receives sensor data 230 from the sensors 105 of the vehicle 150. This includes data collected by cameras of the car, LIDAR, IMU, GPS navigation system, and so on. 
The perception module 210 uses the sensor data to determine what objects are around the vehicle, the details of the road on which the vehicle is travelling; [0086] The map creation module 410 creates the map from map data collected from several vehicles that are driving along various routes. Map data may comprise traffic signs to be stored in the map as will be described further in FIGS. 9 & 10. The map update module 420 updates previously computed map data by receiving more recent information from vehicles that recently travelled along routes on which map information changed. For example, if certain road signs have changed or lane information has changed as a result of construction in a region, the map update module 420 updates the maps accordingly).
Wheeler does not explicitly recite the terms "first encoder," "second encoder," and "third encoder." However, Wheeler discloses encoding first, second, and third features (see [0101-0103], [0183-0185], and [0200-0204] for features), a plurality of processors ([0085]), and a plurality of encoding modules ([FIG. 4A, 4B]), for the benefit of providing a geometric and semantic description of the world around the vehicle, including descriptions of various portions of lanes.
Wheeler discloses, to output the second feature information from the first feature information, to be reflected in the second feature information associated with the graph-level feature (Wheeler: [0106]; [ABS]; [0191-0192]; [FIG. 38ABC, 39, 40]; [FIG. 4A]; [0082]; [0113]; [0161]). Wheeler also discloses “[0222] In one embodiment, lane cut generation occurs after all input features (i.e., explicit/implicit lane lines and navigable boundaries) have been curated. Although more complexity is added to the feature review workflow, as there are dependencies among feature types (lane lines and navigable boundaries are reviewed before lane cuts become available), the detection of topological changes in road network can be done with more confidence and detected lane cuts are more likely to be correct”.
Wheeler does not explicitly disclose, based on the GNN performing a predetermined number of times to output the second feature information from the first feature information, and the predetermined number of times is a parameter for a distance of a neighboring node.
However, in the same field of endeavor, Johnston discloses, based on the GNN performing a predetermined number of times to output the second feature information from the first feature information, and the predetermined number of times is a parameter for a distance of a neighboring node (Johnston: [0057] the node having a node position that is closest (e.g., based on Euclidean distance or another distance measure) to the position of the instance of location information/data satisfies a distance threshold requirement, the instance of location information/data may be matched to that node. If the node having a node position that is closest (e.g., based on Euclidean distance or another distance measure) to the position of the instance of location information/data does not satisfy the distance threshold requirement, the instance of location information/data may be marked as being not matchable to an existing node of the LNG model), for the benefit of generating and updating a lane network graph (LNG) model.
In this case, “wherein the predetermined number of times is a parameter for a distance of a neighboring node” is interpreted as collecting data that meets a sampling criterion (a threshold, in this case a distance requirement).
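For illustration only, and not as part of the record or the cited art, the claimed relationship between an encoding count and a neighbor distance can be sketched as follows: in a graph neural network, running k rounds of neighbor aggregation propagates information from nodes up to k hops away, so the number of rounds acts as a parameter for the distance of a neighboring node. The function and graph below are hypothetical examples constructed by the examiner for explanatory purposes.

```python
# Hypothetical sketch: k rounds of neighbor aggregation reach nodes up to
# k hops away, so the round count parameterizes a neighborhood distance.

def message_passing_rounds(adjacency, features, rounds):
    """Each round, a node's feature becomes the max of its own feature
    and its direct neighbors' features (a toy aggregation rule)."""
    for _ in range(rounds):
        features = [
            max([features[i]] + [features[j] for j in adjacency[i]])
            for i in range(len(features))
        ]
    return features

# Path graph 0-1-2-3; only node 3 starts with a nonzero feature.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
feats = [0, 0, 0, 1]

# After 1 round the signal has traveled 1 hop; after 3 rounds it reaches
# node 0, which is three hops away from node 3.
print(message_passing_rounds(adj, feats, 1))  # [0, 0, 1, 1]
print(message_passing_rounds(adj, feats, 3))  # [1, 1, 1, 1]
```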
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by Wheeler to include updating using only data that meets a predetermined criterion taught by Johnston. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to generate and update a lane network graph (LNG) model.
As cited above, the examiner submits, Wheeler, as modified, discloses, the encoding of the second feature information includes in which the second encoder based on the GNN performing encoding a predetermined number of times. However, should it be found Wheeler, as modified, does not explicitly disclose, the encoding of the second feature information includes in which the second encoder based on the GNN performing encoding a predetermined number of times, in the same field of endeavor, Szczerba discloses this limitation ([0154] programming to monitor inputs from various sources; discern from the inputs critical information by applying critical criteria including preset thresholds, learned thresholds, and/or selectable thresholds to the inputs, wherein the thresholds are set to minimize non-critical distractions upon the operator; and requests graphics for display based upon the critical information; [0155] Thresholds determining critical information from the inputs can be based upon a number of bases. The HUD system manager has access to a number of input sources of information and includes various programmed applications to create a contextual operational environment model to determine whether gathered information is critical information...), for the benefit of determining a confidence of the information from different sources.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by a modified Wheeler to create thresholds for critical information taught by Szczerba. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to determine a confidence of the information from different sources.
The examiner respectfully submits, Wheeler, as modified, discloses, the predetermined number of times is a parameter for a distance of a neighboring node.
However, should it be found Wheeler, as modified, fails to disclose, the predetermined number of times is a parameter for a distance of a neighboring node, in the same field of endeavor, Zhang discloses, the predetermined number of times is a parameter for a distance of a neighboring node (Zhang: [ABS] determining the distance threshold value of the preset distance between the current sampling moment of the positioning data and the positioning data indicated by the history sampling time is greater than between the position; the response distance is greater than the preset threshold to determining, according to the preset map, obtaining candidate lane information set matching with the positioning data of the current sampling moment), for the benefit of speeding up lane matching according to the positioning data.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by a modified Wheeler to create thresholds for candidate lane information taught by Zhang. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to speed up lane matching according to the positioning data.
REGARDING CLAIM 9, Wheeler, as modified, remains as applied above to claim 8, and further, Wheeler also discloses, generate the node feature information based on a positional relationship between a vehicle on the road and a node of the road graph (Wheeler: [0057]); and generate the edge feature information according to a driving direction of the road (Wheeler: [0103]; [0160]; [0187]; [0219]).
REGARDING CLAIM 10, Wheeler, as modified, remains as applied above to claim 8, and further, Wheeler also discloses, the node feature information includes information about a relative position between a node and a vehicle (Wheeler: ([0103]; [0160]; [0187]; [0219]); [0055]; [0057] features on the road relative to the vehicle's position), speed information of the vehicle (Wheeler: [0093]; [0186]; [0189]), and information about whether a node is a node closest to the vehicle (Wheeler: ([0103]; [0160]; [0187]; [0219]); [0055]; [0057]; [0093]; [0189]).
REGARDING CLAIM 11, Wheeler, as modified, remains as applied above to claim 8, and further, Wheeler also discloses, the first operation includes operations of: encoding node-level feature information from node feature information of each node of the road graph by executing the first encoder (Wheeler: [FIG. 41]; [0049]; [0204]; [0184-0185]); encoding edge-level feature information from edge feature information of each edge of the road graph by executing the first encoder (Wheeler: [0159]; [0187]; [0192]); and outputting the first feature information based on the node-level feature information and the edge-level feature information (Wheeler: [0007]).
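For illustration only, and not as part of the record or the cited art, the first operation recited in claim 11 (encoding node-level feature information and edge-level feature information with a shared first encoder, then outputting first feature information based on both) can be sketched as below. All weights, feature values, and the combination rule are hypothetical examples constructed by the examiner for explanatory purposes.

```python
# Hypothetical sketch: a shared MLP-style encoder applied separately to
# node features and edge features, with the outputs then combined.

def mlp_encode(vec, weights, bias):
    """One linear layer followed by ReLU -- the building block of an MLP."""
    out = []
    for row, b in zip(weights, bias):
        s = sum(w * v for w, v in zip(row, vec)) + b
        out.append(max(0.0, s))
    return out

# Illustrative shared encoder parameters (hypothetical values).
W, B = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1]

node_feat = [2.0, 1.0]   # e.g., relative position of a node to the vehicle
edge_feat = [1.0, 3.0]   # e.g., driving-direction information of an edge

node_level = mlp_encode(node_feat, W, B)
edge_level = mlp_encode(edge_feat, W, B)

# First feature information based on both node- and edge-level outputs.
first_feature = node_level + edge_level
print(first_feature)  # four values: encoded node features, then edge features
```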
REGARDING CLAIM 15, Wheeler discloses, encoding first feature information associated with a node-edge-level feature of a road graph (Wheeler: [0187] Lane elements are stored as pieces of a lane element graph. Within the lane element graph, individual lane elements are represented as nodes on the graph connected by edges to other nodes, representing neighboring lane elements of the graph. The edges connecting two lane elements indicate physical connection between two lane elements that a vehicle can legally traverse ... boundaries between lane lines over which cars cannot cross have a representation distinct from the above edges of the lane element graph) from node feature information and edge feature information of the road graph using a first encoder based on a multilayer perceptron (Wheeler: [0164] The HD map system uses deep learning techniques to determine a probability map; [FIG. 4A]; [0082] FIG. 4A illustrates the various layers of instructions in the HD Map API of a vehicle computing system; [0113] the image classification model could implement additional layers in its convolutional neural network; [0161] same information in three-dimensions to provide an additional layer of information to the online HD map system); encoding second feature information associated with a graph-level feature of the road graph from the first feature information using a second encoder based on a graph neural network (GNN) (Wheeler: [0106] In order to build a Landmark Map (LMap) the HD map system needs to know the location and type for every traffic sign. To determine the type of sign, the HD map system uses image based classification. 
This can be done by a human operator or automatically by deep learning algorithms; [ABS] The system builds a large connected network of lane elements and their connections as a lane element graph; [0191-0192] Each configuration can be represented as a directed graph, with node being the lane elements drive into/out of the intersection, and edges are the lane connectors. Each node is labeled with diving restrictions ... The lane element graph module 470 identifies 3602 lane cuts from lane lines and navigable boundaries. The lane cut lines and navigable boundaries are generated from a plurality of received image frames from an imaging system mounted on a vehicle; [FIG. 38ABC, 39, 40]; [FIG. 4A]; [0082] FIG. 4A illustrates the various layers of instructions in the HD Map API of a vehicle computing system; [0113] the image classification model could implement additional layers in its convolutional neural network; [0161] same information in three-dimensions to provide an additional layer of information to the online HD map system); encoding third feature information associated with a time-series feature of the road graph from a series of the second feature information using a third encoder based on a recurrent neural network (Wheeler: [0058] freshness of data by ensuring that the map is updated to reflect changes on the road within a reasonable time frame; [0080] a time-to-live (TTL) parameter specifying a time period after which the route data can be deleted; [FIG. 4A]; [0082] FIG. 
4A illustrates the various layers of instructions in the HD Map API of a vehicle computing system; [0113] the image classification model could implement additional layers in its convolutional neural network; [0161] same information in three-dimensions to provide an additional layer of information to the online HD map system); and outputting control information of a vehicle mapped to the road graph from the third feature information (Wheeler: [0054] Embodiments of the invention maintain high definition (HD) maps containing up to date information using high precision. The HD maps may be used by autonomous vehicles to safely navigate to their destinations without human input or with limited human input (examiner: see [0101-0103], [0183-0185], and [0200-0204] for features used to navigate); [0066] The vehicle controls 130 control the physical movement of the vehicle, for example, acceleration, direction change, starting, stopping, and so on. The vehicle controls 130 include the machinery for controlling the accelerator, brakes, steering wheel, and so on) by reinforcement learning for a policy network (Wheeler: [0113] implement additional layers in its convolutional neural network; [0134] all images of cluster compared against corresponding deep learning result. The HD map system determines a weighted aggregate of these scores and ranks the features to select the best features; [0137] The HD map system 110 applies a convolutional neural network model to the identified portion of the image; [0184-0185] The HD map system applies machine learning techniques (e.g., deep learning) to these images to extract road features (e.g., lane lines). In an embodiment, the HD map system merges the gray-scale image and RGB image into a single 4-channel matrix to learn the model since deep learning can process the input data independent of the number of channels in the input data ... deep learning step, each image pixel is labeled as either “lane line” or “not lane line”. 
In some embodiments, the HD map system uses machine learning based models that further categorize lane lines into different types; [0058] freshness of data by ensuring that the map is updated to reflect changes on the road within a reasonable time frame (examiner: see [0101-0103], [0183-0185], and [0200-0204] for features updated); [0064] keep the HD map data stored locally in the vehicle updated on a regular basis (examiner: see [0101-0103], [0183-0185], and [0200-0204] for features updated)), and wherein, the node feature information or edge feature information of the road graph change according to a movement of the vehicle (Wheeler: [0058] freshness of data by ensuring that the map is updated to reflect changes on the road within a reasonable time frame (examiner: see [0101-0103], [0183-0185], and [0200-0204] for features updated); [0064] keep the HD map data stored locally in the vehicle updated on a regular basis (examiner: see [0101-0103], [0183-0185], and [0200-0204] for features updated)), wherein the road graph includes at least one node (Wheeler: [0085] a lane element graph module 470; [0187] Lane elements are stored as pieces of a lane element graph. Within the lane element graph, individual lane elements are represented as nodes on the graph connected by edges to other nodes, representing neighboring lane elements of the graph; [FIG. 38ABC, 39, 40]; [0191] represented as a directed graph, with node being the lane elements drive into/out of the intersection, and edges are the lane connectors. 
Each node is labeled with diving restrictions), each node corresponding to a point on a road (Wheeler: [0187] the lane element graph, individual lane elements are represented as nodes on the graph connected by edges to other nodes), and at least one edge (Wheeler: [0187] the lane element graph, individual lane elements are represented as nodes on the graph connected by edges to other nodes), each edge corresponding to a connection relationship between nodes (Wheeler: [0187] the lane element graph, individual lane elements are represented as nodes on the graph connected by edges to other nodes), to output the second feature information from the first feature information (Wheeler: [0106]; [ABS]; [0191-0192]; [FIG. 38ABC, 39, 40]; [FIG. 4A]; [0082]; [0113]; [0161]), and updates the node feature information of the road graph at the time of each encoding execution (Wheeler: [0055] Embodiments of the invention generate and maintain high definition (HD) maps that are accurate and include the most updated road conditions for safe navigation. For example, the HD maps provide the current location of the autonomous vehicle relative to the lanes of the road precisely enough to allow the autonomous vehicle to drive safely in the lane; [0058] freshness of data by ensuring that the map is updated to reflect changes on the road within a reasonable time frame; [0062-0064]; [0085-0086]; [0177] The threshold parameters may be determined manually or based on the characteristics of the polyline 3354. In the aforementioned embodiment, extraneous points 3380 must be analyzed to confirm that they are not inflection points. For example, all lane line points 2925 on the polyline between the endpoints are analyzed to identify any points greater than a threshold distance from the polyline. If no lane line points are identified, all points between the endpoints are removed from the polyline consists of only the endpoints. 
Alternatively, if a lane line point 2925 is identified with a distance from the polyline 3354 above a threshold distance, the polyline 3354 is shortened by adjusting one endpoint closer to the identified lane line point. Adjusting the endpoints of the polyline 3354 may be performed by identifying a first midpoint of the entire polyline and identifying any lane line points 2925 between the first midpoint and the first endpoint of the polyline that are a distance greater than the threshold distance from the polyline. If no lane line point is identified, the first midpoint is set as a new endpoint and the above process is performed for a second midpoint that lies between the first midpoint and the second endpoint. If a lane line point 2925 is identified, each lane line point 2925 between the first midpoint and the first endpoint is analyzed. Once the lane line point 2925 has been identified, it is set as a new endpoint for the polyline 3354. The processes described above are performed iteratively until the polyline endpoint and the identified lane line point 2925 overlap at the same point), to be reflected in the second feature information associated with the graph-level feature (Wheeler: [0106]; [ABS]; [0191-0192]; [FIG. 38ABC, 39, 40]; [FIG. 
4A]; [0082]; [0113]; [0161]), and wherein the encoding of the third feature information includes executing the third encoder at predetermined time intervals (Wheeler: [0058] freshness of data by ensuring that the map is updated to reflect changes on the road within a reasonable time frame (examiner: see [0101-0103], [0183-0185], and [0200-0204] for features updated); [0064] keep the HD map data stored locally in the vehicle updated on a regular basis (examiner: see [0101-0103], [0183-0185], and [0200-0204] for features updated)) to output the third feature information associated with a time-series feature of the road graph changing (Wheeler: [0058] freshness of data by ensuring that the map is updated to reflect changes on the road within a reasonable time frame (examiner: see [0101-0103], [0183-0185], and [0200-0204] for features updated); [0064] keep the HD map data stored locally in the vehicle updated on a regular basis (examiner: see [0101-0103], [0183-0185], and [0200-0204] for features updated)) according to the movement of the vehicle on the road over time from a series of the second feature information (Wheeler: [0070] The perception module 210 receives sensor data 230 from the sensors 105 of the vehicle 150. This includes data collected by cameras of the car, LIDAR, IMU, GPS navigation system, and so on. The perception module 210 uses the sensor data to determine what objects are around the vehicle, the details of the road on which the vehicle is travelling; [0086] The map creation module 410 creates the map from map data collected from several vehicles that are driving along various routes. Map data may comprise traffic signs to be stored in the map as will be described further in FIGS. 9 & 10. The map update module 420 updates previously computed map data by receiving more recent information from vehicles that recently travelled along routes on which map information changed. 
For example, if certain road signs have changed or lane information has changed as a result of construction in a region, the map update module 420 updates the maps accordingly).
Wheeler discloses “[0222] In one embodiment, lane cut generation occurs after all input features (i.e., explicit/implicit lane lines and navigable boundaries) have been curated. Although more complexity is added to the feature review workflow, as there are dependencies among feature types (lane lines and navigable boundaries are reviewed before lane cuts become available), the detection of topological changes in road network can be done with more confidence and detected lane cuts are more likely to be correct”.
Wheeler does not explicitly disclose, the encoding of the second feature information includes in which the second encoder based on the GNN performing encoding a predetermined number of times wherein the predetermined number of times is a parameter for a distance of a neighboring node.
However, in the same field of endeavor, Johnston discloses, the encoding of the second feature information includes in which the second encoder based on the GNN performing encoding a predetermined number of times wherein the predetermined number of times is a parameter for a distance of a neighboring node (Johnston: [0057] the node having a node position that is closest (e.g., based on Euclidean distance or another distance measure) to the position of the instance of location information/data satisfies a distance threshold requirement, the instance of location information/data may be matched to that node. If the node having a node position that is closest (e.g., based on Euclidean distance or another distance measure) to the position of the instance of location information/data does not satisfy the distance threshold requirement, the instance of location information/data may be marked as being not matchable to an existing node of the LNG model), for the benefit of generating and updating a lane network graph (LNG) model.
In this case, “wherein the predetermined number of times is a parameter for a distance of a neighboring node” is interpreted as collecting data that meets a sampling criterion (a threshold, in this case a distance requirement).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by a modified Wheeler to include updating using only data that meets a predetermined criterion taught by Johnston. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to generate and update a lane network graph (LNG) model.
As cited above, the examiner submits, Wheeler, as modified, discloses, the encoding of the second feature information includes in which the second encoder based on the GNN performing encoding a predetermined number of times. However, should it be found Wheeler, as modified, does not explicitly disclose, the encoding of the second feature information includes in which the second encoder based on the GNN performing encoding a predetermined number of times, in the same field of endeavor, Szczerba discloses this limitation ([0154] programming to monitor inputs from various sources; discern from the inputs critical information by applying critical criteria including preset thresholds, learned thresholds, and/or selectable thresholds to the inputs, wherein the thresholds are set to minimize non-critical distractions upon the operator; and requests graphics for display based upon the critical information; [0155] Thresholds determining critical information from the inputs can be based upon a number of bases. The HUD system manager has access to a number of input sources of information and includes various programmed applications to create a contextual operational environment model to determine whether gathered information is critical information...), for the benefit of determining a confidence of the information from different sources.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by a modified Wheeler to create thresholds for critical information taught by Szczerba. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to determine a confidence of the information from different sources.
The examiner respectfully submits, Wheeler, as modified, discloses, the predetermined number of times is a parameter for a distance of a neighboring node.
However, should it be found Wheeler, as modified, fails to disclose, the predetermined number of times is a parameter for a distance of a neighboring node, in the same field of endeavor, Zhang discloses, wherein the predetermined number of times is a parameter for a distance of a neighboring node (Zhang: [ABS] determining the distance threshold value of the preset distance between the current sampling moment of the positioning data and the positioning data indicated by the history sampling time is greater than between the position; the response distance is greater than the preset threshold to determining, according to the preset map, obtaining candidate lane information set matching with the positioning data of the current sampling moment), for the benefit of speeding up lane matching according to the positioning data.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by a modified Wheeler to create thresholds for candidate lane information taught by Zhang. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to speed up lane matching according to the positioning data.
Response to Arguments
Applicant's arguments filed 09-10-2025, beginning on page 9, have been fully considered but they are not persuasive. To the examiner’s best understanding, the applicant has contended that the prior art of Wheeler (US 20210172756 A1) fails to disclose:
outputting control information of a vehicle mapped to the road graph from the third feature information
[0054] Embodiments of the invention maintain high definition (HD) maps containing up to date information using high precision. The HD maps may be used by autonomous vehicles to safely navigate to their destinations without human input or with limited human input (examiner: see [0101-0103], [0183-0185], and [0200-0204] for features used to navigate);
[0066] The vehicle controls 130 control the physical movement of the vehicle, for example, acceleration, direction change, starting, stopping, and so on. The vehicle controls 130 include the machinery for controlling the accelerator, brakes, steering wheel, and so on
[0083] the common HD map API layer 330 may invoke functionality provided by the vehicle manufacturer adapter 310 to send specific control instructions to the vehicle controls
[0101] the HD map system 100 stores a representation of a network of lanes to allow a vehicle to plan a legal path between a source and a destination and to add a frame of reference for real time sensing and control of the vehicle
by reinforcement learning for a policy network
[0128] According to an embodiment, the HD map system focuses on 3D location of the points and disregards the 2D. In this approach the HD map system refines the location of the 3D sign vertices by optimizing the distance between each pair of vertices simultaneously with the plane orientation and location. The HD map system changes the distance between points by perturbing the image coordinates of each point used to project onto the 3D plane. This results in a 2×N +6 degree of freedom optimization problem, where N is the number of sign vertices. This produces a regularized 3D geometry that projects onto the best fit plane. Although this produces the best 3D sign, it does not minimize the 2D reprojection error. Since a sign is labelled on a single sample, the HD map system relies on the aggregation of signs in the automated sign creation to reduce the reprojection error across image samples. [0129] According to another embodiment, the HD map system minimizes the plane fit error and the reprojection error. In the optimization, the HD map system minimizes (1/N)*(sum squared plane fit error)+lambda*(1/N)*(sum squared 3d reprojection errors). Where N is the number of vertices and lambda is a regularization term to balance plane fit and reprojection errors. The HD map system measures the plane fitting error across all points that were inliers in the initial RANSAC plane fitting and measures the reprojection error across all image samples supplied for feature creation. This provides the HD map system the ability to minimize the plane fit and reprojection error on a single image during automation and then rerun the process across all image samples the feature is visible from during the sign aggregation step to minimize the plane fitting and reprojection error across all image samples;
[0134] Creating a sign hypothesis for every image sample ... the HD map system selects scores based on various criteria including closeness to median area of cluster, angle between sign normal and car heading, reprojection error of sign compared against deep learning detection result, reprojection error of sign on all images of cluster compared against corresponding deep learning result; [0182-0184]
To the examiner’s best understanding, the applicant has emphasized that the prior art of Wheeler (US 20210172756 A1) fails to disclose “reinforcement learning”. The examiner respectfully submits that, in this context, “reinforcement learning” describes a neural network that learns from interacting with its environment. In this case, the image and other sensor samples are interpreted as the interaction with the environment. Because Wheeler (US 20210172756 A1) discloses using a neural network with image and sensor samples to create 3D maps used to control a vehicle, the examiner respectfully maintains the rejection of the independent claims under 35 USC §103, obviousness.
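For illustration only, and not as part of the record or the cited art, the conventional meaning of “reinforcement learning for a policy network” applied above can be sketched as follows: a policy’s action probabilities are adjusted from rewards received through environment interaction. The two-action environment, reward values, and REINFORCE-style update below are hypothetical examples constructed by the examiner for explanatory purposes.

```python
import math
import random

# Hypothetical sketch: a two-action policy learns from environment rewards
# via a REINFORCE-style (policy-gradient) update.
random.seed(0)

theta = [0.0, 0.0]      # policy parameters, one logit per action
REWARDS = [0.2, 0.8]    # probability each action is rewarded (the "environment")
LR = 0.5

def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

for _ in range(500):
    probs = softmax(theta)
    action = 0 if random.random() < probs[0] else 1
    reward = 1.0 if random.random() < REWARDS[action] else 0.0
    # REINFORCE gradient ascent: d/d_theta_i log pi(a) = 1{a=i} - pi(i)
    for i in range(2):
        grad = (1.0 if i == action else 0.0) - probs[i]
        theta[i] += LR * reward * grad

# Interaction with the environment shifts probability toward the
# higher-reward action.
probs = softmax(theta)
print(probs[1] > probs[0])  # True
```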
The applicant has further contended, to the examiner’s best understanding, Wheeler (US 20210172756 A1) fails to disclose:
encoding third feature information associated with a time-series feature of the road graph from a series of the second feature information using a third encoder
as stated above, Wheeler (US 20210172756 A1) discloses creating, and refreshing/updating, HD maps containing road geometry, location of lanes, respective lane semantics, center lines, merging lanes, intersections, lane speed, lane trajectory, lane type, landmarks, speed bumps, curbs, signs, traffic lights, etc. These features are interpreted as first, second, third, fourth, and so on, features
based on a recurrent neural network
[0108] Embodiments create 3D planar objects from imagery and lidar information. Accordingly, the HD map system creates highly accurate 3D planar objects from one or more images and a sequence of one or more LiDAR scans of the area
[0121] To determine the 3D location of the sign, the HD map system determines the 3D geometry of the scene. Since a vehicle is scanning the world using LiDAR sensor(s), the HD map system efficiently and accurately creates a 3D representation of the image scene. Embodiments produce the scene from the LiDAR information using following techniques: (1.) using a single scan at the time of that the image was captured, (2.) aggregating aligned scans from before and after the image sample, and (3.) using the OMap (a 3D volumetric grid of occupied points built by fusing many sample runs through a region) … [0126] The row of the image indicates at what time that point was captured by the image such that the HD map system can shift it accordingly. This correction ensures that the 3D points correctly project onto the image
As stated above, the claimed first, second, and third encoders omnipresent in the claims are interpreted as actually being a single piece of hardware, specifically, processor 110 (instant specification [0060-0066] discloses that the three claimed encoders are processor 110). Further, Wheeler (US 20210172756 A1) discloses a plurality of modules to perform coding on a map (see at least figures 4A and 4B, total of 12 modules). Further still, to the examiner’s best understanding, a recurrent neural network is a neural network designed to process sequential or time-series data. The examiner respectfully submits, while Wheeler (US 20210172756 A1) does not explicitly recite the claimed terminology, Wheeler (US 20210172756 A1) discloses a neural network and processing sequential or time-series data ([0108], [0121-0126], also see updating maps). Because Wheeler (US 20210172756 A1) discloses 12 modules for coding a map and discloses processing sequential or time-series data in a neural network to create 3D maps, the examiner respectfully maintains the rejection of the independent claims under 35 USC §103, obviousness.
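For illustration only, and not as part of the record or the cited art, the characterization of a recurrent neural network applied above (a network designed to process sequential or time-series data) can be sketched as below: a hidden state carries information from earlier time steps into later ones, so the same input value can yield different outputs depending on history. The cell weights and inputs are hypothetical examples constructed by the examiner for explanatory purposes.

```python
import math

# Hypothetical sketch: a minimal recurrent cell whose hidden state carries
# information from earlier time steps into later ones.
def rnn_forward(sequence, w_in=0.5, w_rec=0.9):
    h = 0.0
    states = []
    for x in sequence:
        h = math.tanh(w_in * x + w_rec * h)  # new state depends on old state
        states.append(h)
    return states

# The second input is 0.0 in both sequences, yet the step-2 outputs differ
# because of the step-1 history -- the property that suits an RNN to
# sequential or time-series data.
a = rnn_forward([1.0, 0.0])
b = rnn_forward([0.0, 0.0])
print(a[1] != b[1])  # True
```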
In response to applicant's argument that the examiner's conclusion of obviousness is based upon improper hindsight reasoning, it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. But so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971). Further, Wheeler (US 20210172756 A1) and Johnston (US 20190285421 A1) are both directed to generating and updating a lane model representing navigable space or 3D map.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Martin (US 20210232913 A1)
Jun (KR 101951595 B1)
Ferencz (US 20150354976 A1)
Narayanan (US 20210148727 A1)
Lee (KR 20200084750 A)
Burnette (US 20130253753 A1)
Blaiotta (US 20210394784 A1)
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARRON SANTOS whose telephone number is (571)272-5288. The examiner can normally be reached Monday - Friday: 8:00am - 4:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANGELA ORTIZ can be reached at (571) 272-1206. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.S./Examiner, Art Unit 3663
/ANGELA Y ORTIZ/Supervisory Patent Examiner, Art Unit 3663