DETAILED ACTION
Claim Objections
Claims 12-15 are objected to because of the following informalities: claims 12-15 should be dependent on claim 11, as they are similar to claims 6-7 and 9-10, which are dependent on claim 1.
Claims 17-20 are objected to because of the following informalities: claims 17-20 should be dependent on claim 16, as they are similar to claims 6-7 and 9-10, which are dependent on claim 1.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Independent claim 1 recites “generating or updating a map layer, using one or more processors of a server, based in part on positioning data obtained from one or more edge devices and videos captured by the one or more edge devices; receiving, at the server, a traffic rule via a user dragging and dropping the traffic rule onto a roadway shown on an interactive map editor user interface; and generating or updating, using the one or more processors of the server, a traffic enforcement layer on top of the map layer, wherein the traffic rule is saved as part of the traffic enforcement layer”. This limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. Nothing in the claim precludes the generating and updating steps from practically being performed in the human mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Specifically, independent claim 1 recites “one or more processors of a server” and “edge devices”, which are generic computing components that do not, by themselves, transform the claim into a non-abstract concept. The recited user interface interaction (dragging and dropping) is merely a manner of presenting and receiving information and is generally considered generic or conventional. As such, the claim is directed solely to performing mental processes that fall into the “Mental Processes” grouping of abstract ideas and is therefore directed to a judicial exception.
This judicial exception is not integrated into a practical application. The claim recites collecting data and saving an updated traffic rule. The claim therefore does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to the abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Claims 2-10 recite limitations adding specific information to the abstract idea. These additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are not patent eligible.
Claims 11-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Independent claim 11 recites “generating or updating a map layer, using one or more processors of a server, based in part on positioning data obtained from one or more edge devices and videos captured by the one or more edge devices; generating or updating, using the one or more processors of the server, a traffic enforcement layer on top of the map layer, wherein a plurality of traffic rules are saved as part of the traffic enforcement layer; and generating or updating, using the one or more processors of the server, a traffic insight layer, wherein the traffic insight layer is configured to adjust or provide a suggestion to adjust at least one of the traffic rules based on a change in a traffic throughput or flow determined by the traffic insight layer, and wherein adjusting or providing the suggestion to adjust one of the traffic rules further comprises not enforcing or providing a suggestion to not enforce one of the traffic rules”. This limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. Nothing in the claim precludes the generating and updating steps from practically being performed in the human mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Specifically, independent claim 11 recites “one or more processors of a server” and “edge devices”, which are generic computing components that do not, by themselves, transform the claim into a non-abstract concept. As such, the claim is directed solely to performing mental processes that fall into the “Mental Processes” grouping of abstract ideas and is therefore directed to a judicial exception.
This judicial exception is not integrated into a practical application. The claim recites collecting data and saving an updated traffic rule. The claim therefore does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to the abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Claims 12-15 recite limitations adding specific information to the abstract idea. These additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are not patent eligible.
Claims 16-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Independent claim 16 recites “generating or updating a map layer, using one or more processors of a server, based in part on positioning data obtained from one or more edge devices and videos captured by the one or more edge devices, wherein the map layer is generated or updated by passing the videos captured by at least one of the edge devices to a neural network running on the edge device and annotating the map layer with object labels outputted by the neural network; generating or updating, using the one or more processors of the server, a traffic enforcement layer on top of the map layer, wherein a plurality of traffic rules are saved as part of the traffic enforcement layer; and generating or updating, using the one or more processors of the server, a traffic insight layer, wherein the traffic insight layer is configured to adjust or provide a suggestion to adjust at least one of the traffic rules of the traffic enforcement layer based in part on traffic violations or traffic conditions determined by the one or more edge devices or the server”. This limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. Nothing in the claim precludes the generating and updating steps from practically being performed in the human mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Specifically, independent claim 16 recites “one or more processors of a server” and “edge devices”, which are generic computing components, and recites the use of generic machine learning (a neural network); these elements do not, by themselves, transform the claim into a non-abstract concept. The Federal Circuit has held that merely applying generic machine learning to a new data environment (such as traffic management) without claiming an improvement to the machine learning model itself is insufficient to avoid classification as an abstract idea. As such, the claim is directed solely to performing mental processes that fall into the “Mental Processes” grouping of abstract ideas and is therefore directed to a judicial exception.
This judicial exception is not integrated into a practical application. The claim recites collecting data and saving an updated traffic rule. The claim therefore does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to the abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Claims 17-20 recite limitations adding specific information to the abstract idea. These additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are not patent eligible.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-9 of U.S. Patent No. 12,266,261. Although the claims at issue are not identical, they are not patentably distinct from each other because patent claims 1 and 3 teach all of the limitations of instant claim 1.
Similarly, instant claim 2 is met by patent claim 3,
instant claim 3 is met by patent claim 4,
instant claim 4 is met by part of patent claim 1,
instant claim 5 is met by patent claim 6,
instant claim 6 is met by patent claim 8,
instant claim 7 is met by patent claim 5,
instant claim 8 is met by part of patent claim 1,
instant claim 9 is met by patent claim 7,
instant claim 10 is met by patent claim 9,
instant claim 11 is met by patent claims 1 and 6,
instant claim 12 is met by patent claim 2,
instant claim 13 is met by patent claim 7,
instant claim 14 is met by patent claim 8,
instant claim 15 is met by patent claim 9,
instant claim 16 is met by patent claims 1 and 9,
instant claim 17 is met by patent claim 2,
instant claim 18 is met by patent claim 7,
instant claim 19 is met by patent claim 8,
instant claim 20 is met by patent claim 9.
The patent claims include all of the limitations of the instant application claims, respectively. The patent claims also include additional limitations. Hence, the instant application claims are generic to the species of invention covered by the respective patent claims. As such, the instant application claims are anticipated by the patent claims and are therefore not patentably distinct therefrom. (See Eli Lilly and Co. v. Barr Laboratories Inc., 58 USPQ2d 1869, "a later genus claim limitation is anticipated by, and therefore not patentably distinct from, an earlier species claim", In re Goodman, 29 USPQ2d 2010, "Thus, the generic invention is 'anticipated' by the species of the patented invention" and the instant “application claims are generic to species of invention covered by the patent claim, and since without terminal disclaimer, extant species claims preclude issuance of generic application claims”).
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-10 of U.S. Patent No. 11,322,017. Although the claims at issue are not identical, they are not patentably distinct from each other because patent claims 1 and 3 teach all of the limitations of instant claim 1.
Similarly, instant claim 2 is met by patent claim 3,
instant claim 3 is met by patent claim 4,
instant claim 4 is met by part of patent claim 1,
instant claim 5 is met by patent claims 7 and 8,
instant claim 6 is met by patent claim 9,
instant claim 7 is met by patent claim 5,
instant claim 8 is met by part of patent claim 6,
instant claim 9 is met by part of patent claim 1,
instant claim 10 is met by patent claim 10,
instant claim 11 is met by patent claims 1, 7, and 8,
instant claim 12 is met by patent claim 2,
instant claim 13 is met by part of patent claim 1,
instant claim 14 is met by patent claim 9,
instant claim 15 is met by patent claim 10,
instant claim 16 is met by patent claims 1 and 10,
instant claim 17 is met by patent claim 2,
instant claim 18 is met by part of patent claim 1,
instant claim 19 is met by patent claim 9,
instant claim 20 is met by patent claim 10.
The patent claims include all of the limitations of the instant application claims, respectively. The patent claims also include additional limitations. Hence, the instant application claims are generic to the species of invention covered by the respective patent claims. As such, the instant application claims are anticipated by the patent claims and are therefore not patentably distinct therefrom. (See Eli Lilly and Co. v. Barr Laboratories Inc., 58 USPQ2d 1869, "a later genus claim limitation is anticipated by, and therefore not patentably distinct from, an earlier species claim", In re Goodman, 29 USPQ2d 2010, "Thus, the generic invention is 'anticipated' by the species of the patented invention" and the instant “application claims are generic to species of invention covered by the patent claim, and since without terminal disclaimer, extant species claims preclude issuance of generic application claims”).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 11, 13, 15-16, 18, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Efland et al. (US 20210406559 A1).
In regard to claim 11, Efland teaches a method of managing traffic rules related to traffic enforcement (Efland, Fig. 2, the map 200 corresponds to given area of the real world and is made up of several different layers that each contain different information about the real-world environment), comprising: generating or updating a map layer (Efland, Fig. 2, Para. 60, the geometric map layer 202 may provide a representation of the real-world environment that is orders of magnitude more precise than the representation provided by the base map layer 201. For example, while the base map layer 201 may represent the location of the road network within the real-world environment at an approximately meter-level of precision, which is generally not sufficient to support autonomous vehicle operation, the geometric map layer 202 may be able to represent the location of the road network within the real-world environment at a centimeter-level of precision), using one or more processors of a server (Efland, Fig. 5, Para. 131, computing platform 500 may generally comprise any one or more computer systems (e.g., an on-board vehicle computing system and/or one or more off-board servers) that collectively include at least a processor 502, data storage 504, and a communication interface 506), based in part on positioning data obtained from one or more edge devices and videos captured by the one or more edge devices (Efland, Para. 68, beginning with the collection of sensor data for a given real-world environment, which could take various forms (e.g., image data, LiDAR data, GPS data, IMU data, etc.). The collected sensor data is fused together and processed in order to generate the geometric data for the high-resolution map); generating or updating, using the one or more processors of the server, a traffic enforcement layer on top of the map layer, wherein a plurality of traffic rules are saved as part of the traffic enforcement layer (Efland, Para. 61, Building from the geometric map layer 202, the map 200 may further include a semantic map layer 203 that includes data objects for semantic elements that are found within the real-world environment (i.e., “semantic objects”), which may be embedded with semantic metadata indicating information about such semantic elements. For example, the semantic map layer 203 may include semantic objects for lane boundaries, crosswalks, parking spots, stop signs, traffic lights and the like, each of which includes semantic metadata that provides information about the classification of the semantic element, the location of the semantic element, and perhaps also additional contextual information about the semantic element that can be used by a vehicle to drive safely and effectively); and generating or updating, using the one or more processors of the server, a traffic insight layer, wherein the traffic insight layer is configured to adjust or provide a suggestion to adjust at least one of the traffic rules based on a change in a traffic throughput or flow determined by the traffic insight layer (Efland, Para. 110, The computing system may also flag the given area of real-world environment 100 for re-evaluation and, at a later time, the given area may be re-evaluated. For example, vehicle 101 or another vehicle may capture sensor data including new 2D images of the given area that indicate that the previously detected traffic control elements are no longer present), and wherein adjusting or providing the suggestion to adjust one of the traffic rules further comprises not enforcing or providing a suggestion to not enforce one of the traffic rules (Efland, Para. 110, In response, the computing platform may revert the previous updates that were made to the real-time layer 205 (e.g., by pushing a command to revert the previous updates) such that the map may return to its original state; i.e., the turn restriction is no longer enforced once the turn restriction signs 103, 104 are removed).
In regard to claim 13, Efland teaches the method of claim 1, wherein each of the edge devices is coupled to a carrier vehicle and wherein at least part of the videos are captured while the carrier vehicle is in motion (Efland, Para. 70, the collected sensor data discussed herein may generally refer to sensor data captured by one or more sensor-equipped vehicles operating in the real-world environment and may take various forms).
In regard to claim 15, Efland teaches the method of claim 1, wherein the map layer is generated or updated by passing the videos captured by at least one of the edge devices to a neural network running on the edge device and annotating the map layer with object labels outputted by the neural network (Efland, Fig. 6, Para. 156, deriving the representation of the surrounding environment perceived by vehicle 600 using the raw data may involve detecting objects within the vehicle's surrounding environment, which may result in the determination of class labels, bounding boxes, or the like for each detected object. In this respect, the particular classes of objects that are detected by perception subsystem 602a (which may be referred to as “agents”) may take various forms, including both (i) “dynamic” objects that have the potential to move, such as vehicles, cyclists, pedestrians, and animals, among other examples, and (ii) “static” objects that generally do not have the potential to move, such as streets, curbs, lane markings, traffic lights, stop signs, and buildings, among other examples. Further, in practice, perception subsystem 602a may be configured to detect objects within the vehicle's surrounding environment using any type of object detection model now known or later developed, including but not limited object detection models based on convolutional neural networks (CNN)).
In regard to claim 16, Efland teaches a method of managing traffic rules related to traffic enforcement (Efland, Fig. 2, the map 200 corresponds to given area of the real world and is made up of several different layers that each contain different information about the real-world environment), comprising: generating or updating a map layer (Efland, Fig. 2, Para. 60, the geometric map layer 202 may provide a representation of the real-world environment that is orders of magnitude more precise than the representation provided by the base map layer 201. For example, while the base map layer 201 may represent the location of the road network within the real-world environment at an approximately meter-level of precision, which is generally not sufficient to support autonomous vehicle operation, the geometric map layer 202 may be able to represent the location of the road network within the real-world environment at a centimeter-level of precision), using one or more processors of a server (Efland, Fig. 5, Para. 131, computing platform 500 may generally comprise any one or more computer systems (e.g., an on-board vehicle computing system and/or one or more off-board servers) that collectively include at least a processor 502, data storage 504, and a communication interface 506), based in part on positioning data obtained from one or more edge devices and videos captured by the one or more edge devices (Efland, Para. 68, beginning with the collection of sensor data for a given real-world environment, which could take various forms (e.g., image data, LiDAR data, GPS data, IMU data, etc.). The collected sensor data is fused together and processed in order to generate the geometric data for the high-resolution map), wherein the map layer is generated or updated by passing the videos captured by at least one of the edge devices to a neural network running on the edge device and annotating the map layer with object labels outputted by the neural network (Efland, Fig. 6, Para. 156, deriving the representation of the surrounding environment perceived by vehicle 600 using the raw data may involve detecting objects within the vehicle's surrounding environment, which may result in the determination of class labels, bounding boxes, or the like for each detected object. In this respect, the particular classes of objects that are detected by perception subsystem 602a (which may be referred to as “agents”) may take various forms, including both (i) “dynamic” objects that have the potential to move, such as vehicles, cyclists, pedestrians, and animals, among other examples, and (ii) “static” objects that generally do not have the potential to move, such as streets, curbs, lane markings, traffic lights, stop signs, and buildings, among other examples. Further, in practice, perception subsystem 602a may be configured to detect objects within the vehicle's surrounding environment using any type of object detection model now known or later developed, including but not limited object detection models based on convolutional neural networks (CNN)); generating or updating, using the one or more processors of the server, a traffic enforcement layer on top of the map layer, wherein a plurality of traffic rules are saved as part of the traffic enforcement layer (Efland, Para. 61, Building from the geometric map layer 202, the map 200 may further include a semantic map layer 203 that includes data objects for semantic elements that are found within the real-world environment (i.e., “semantic objects”), which may be embedded with semantic metadata indicating information about such semantic elements. For example, the semantic map layer 203 may include semantic objects for lane boundaries, crosswalks, parking spots, stop signs, traffic lights and the like, each of which includes semantic metadata that provides information about the classification of the semantic element, the location of the semantic element, and perhaps also additional contextual information about the semantic element that can be used by a vehicle to drive safely and effectively); and generating or updating, using the one or more processors of the server, a traffic insight layer, wherein the traffic insight layer is configured to adjust or provide a suggestion to adjust at least one of the traffic rules of the traffic enforcement layer (Efland, Fig. 4A; Para. 109, the computing platform may effect updates to the real-time layer 203 by adding information for new semantic elements 103, 104, and 105, as depicted in the top-down view 115 showing a visualization of the updated map 115. In addition, the computing platform may update the real-time layer by adding an indication of the construction zone that is blocking traffic in the given lane, shown in the top-down view 115 as polygon 106) based in part on traffic violations or traffic conditions determined by the one or more edge devices or the server (Efland, Para. 106, In FIG. 4A, vehicle 101 may be a human-driven vehicle equipped with a sensor that captures image data, such as a monocular camera (e.g., a dashboard camera) that captures 2D image data. The 2D image data may include indications of the signs 103 and 104 and the barricade 105. Vehicle 101 may also capture GPS sensor data that may provide an approximation of the location of vehicle 101 within the given area of real-world environment 100).
In regard to claim 18, the claim is interpreted and rejected for the same reasons as set forth in the rejection of claim 13 above.
In regard to claim 20, the claim is interpreted and rejected for the same reasons as set forth in the rejection of claim 15 above.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 7-10, 12, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Efland et al. (US 20210406559 A1) in view of Monaci et al. (US 20210264339 A1).
In regard to claim 1, Efland teaches a method of managing traffic rules related to traffic enforcement (Efland, Fig. 2, the map 200 corresponds to given area of the real world and is made up of several different layers that each contain different information about the real-world environment), comprising: generating or updating a map layer (Efland, Fig. 2, Para. 60, the geometric map layer 202 may provide a representation of the real-world environment that is orders of magnitude more precise than the representation provided by the base map layer 201. For example, while the base map layer 201 may represent the location of the road network within the real-world environment at an approximately meter-level of precision, which is generally not sufficient to support autonomous vehicle operation, the geometric map layer 202 may be able to represent the location of the road network within the real-world environment at a centimeter-level of precision), using one or more processors of a server (Efland, Fig. 5, Para. 131, computing platform 500 may generally comprise any one or more computer systems (e.g., an on-board vehicle computing system and/or one or more off-board servers) that collectively include at least a processor 502, data storage 504, and a communication interface 506), based in part on positioning data obtained from one or more edge devices and videos captured by the one or more edge devices (Efland, Para. 68, beginning with the collection of sensor data for a given real-world environment, which could take various forms (e.g., image data, LiDAR data, GPS data, IMU data, etc.). The collected sensor data is fused together and processed in order to generate the geometric data for the high-resolution map); and generating or updating, using the one or more processors of the server, a traffic enforcement layer on top of the map layer, wherein the traffic rule is saved as part of the traffic enforcement layer (Efland, Para. 61, Building from the geometric map layer 202, the map 200 may further include a semantic map layer 203 that includes data objects for semantic elements that are found within the real-world environment (i.e., “semantic objects”), which may be embedded with semantic metadata indicating information about such semantic elements. For example, the semantic map layer 203 may include semantic objects for lane boundaries, crosswalks, parking spots, stop signs, traffic lights and the like, each of which includes semantic metadata that provides information about the classification of the semantic element, the location of the semantic element, and perhaps also additional contextual information about the semantic element that can be used by a vehicle to drive safely and effectively).
Efland further teaches that this initial set of semantic data may then undergo a human curation/validation stage during which human curators review and update the initial set of semantic data in order to ensure that it has a sufficient level of accuracy for use in a high-definition map (e.g., position information that is accurate at a centimeter-level) (Para. 68).
Efland does not specifically teach receiving, at the server, a traffic rule via a user dragging and dropping the traffic rule onto a roadway shown on an interactive map editor user interface.
However, Monaci teaches receiving, at the server, a traffic rule via a user dragging and dropping the traffic rule onto a roadway shown on an interactive map editor user interface (Monaci, Para. 89, Interactive editing for the schematic maps 140, 142 can include, for instance, receiving edit commands (selection, dragging, controls, or any other suitable interface command) (not shown) to select, move, enlarge, shrink, add, delete, customize (e.g., change line or shape color or configuration), label, etc. one or more components of the schematic map 140, 142).
Efland and Monaci are analogous art because they both pertain to generating maps based on learned traffic situations.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Efland to include editing capabilities for generating and modifying maps in real time (as taught by Monaci) in order to ensure that the map data has a sufficient level of accuracy for use in a semantic map.
In regard to claim 2, the combination of Efland and Monaci teaches the method of claim 1, wherein the traffic rules comprises at least one of a rule type, a rule attribute, and a rule logic (Monaci, Para. 89, Interactive editing for the schematic maps 140, 142 can include, for instance, receiving edit commands (selection, dragging, controls, or any other suitable interface command) (not shown) to select, move, enlarge, shrink, add, delete, customize (e.g., change line or shape color or configuration), label, etc. one or more components of the schematic map 140, 142).
In regard to claim 3, the combination of Efland and Monaci teaches the method of claim 2, further comprising receiving the traffic rules in response to the user dragging and dropping at least one of the rule type, the rule attribute, and the rule logic onto a route point displayed over the roadway (Monaci, Fig. 4A; Para. 90; Para. 106, a user such as a transportation network operator can use the schedule editor module to edit spatial information, such as by creating, configuring, or updating one or more routes along transportation lines and their transport stops. Further, the user (or a different user) can edit the temporal information to reflect updated information. Spatial and temporal information can be edited in some embodiments using different interfaces, and the edited information provided via one interface can be synchronized with the information viewed on the other interface).
In regard to claim 4, the combination of Efland and Monaci teaches the method of claim 1, further comprising generating or updating a traffic insight layer, wherein the traffic insight layer is configured to adjust or provide a suggestion to adjust at least one of the traffic rules of the traffic enforcement layer (Efland, Fig. 4A; Para. 109, the computing platform may effect updates to the real-time layer 203 by adding information for new semantic elements 103, 104, and 105, as depicted in the top-down view 115 showing a visualization of the updated map 115. In addition, the computing platform may update the real-time layer by adding an indication of the construction zone that is blocking traffic in the given lane, shown in the top-down view 115 as polygon 106) based in part on traffic violations or traffic conditions determined by the one or more edge devices or the server (Efland, Para. 106, In FIG. 4A, vehicle 101 may be a human-driven vehicle equipped with a sensor that captures image data, such as a monocular camera (e.g., a dashboard camera) that captures 2D image data. The 2D image data may include indications of the signs 103 and 104 and the barricade 105. Vehicle 101 may also capture GPS sensor data that may provide an approximation of the location of vehicle 101 within the given area of real-world environment 100).
In regard to claim 5, the combination of Efland and Monaci teaches the method of claim 4, wherein the traffic insight layer is further configured to adjust or provide the suggestion to adjust one of the traffic rules based on a change in a traffic throughput or flow determined by the traffic insight layer (Efland, Para. 110, The computing system may also flag the given area of real-world environment 100 for re-evaluation and, at a later time, the given area may be re-evaluated. For example, vehicle 101 or another vehicle may capture sensor data including new 2D images of the given area that indicate that the previously detected traffic control elements are no longer present), and wherein adjusting or providing the suggestion to adjust one of the traffic rules further comprises not enforcing or providing a suggestion to not enforce one of the traffic rules based on the change in the traffic throughput or flow (Efland, Para. 110, In response, the computing platform may revert the previous updates that were made to the real-time layer 205 (e.g., by pushing a command to revert the previous updates) such that the map may return to its original state; i.e., the turn restriction is no longer enforced once the turn restriction signs 103, 104 are removed).
In regard to claim 7, the combination of Efland and Monaci teaches the method of claim 1, wherein updating the map layer further comprises receiving a semantic annotation via user inputs applied to the interactive map editor user interface (Monaci, Para. 135, Different and more granular ways of dynamically rendering the map 230 at different time instances can additionally or alternatively be used. Nonlimiting examples include: modifying colors; modifying line thickness or configuration (e.g., dashed, dotted, stippled, etc.); modifying one or more shapes; modifying transport stop markers; modifying labels for transport stops and/or transportation lines; creating movement in one or more indications (e.g., causing one or more lines or transport stops to repeatedly blink, shrink, enlarge, or fade); and/or extracting and regenerating completely different network visualizations for different times of the day).
In regard to claim 8, the combination of Efland and Monaci teaches the method of claim 1, wherein generating or updating the traffic enforcement layer further comprises converting raw traffic rule data into the traffic rule (Efland, Para. 76-77, In this regard, block 302 may involve the evaluation of both raw sensor data as well as derived data that is based on the raw sensor data (e.g., a vectorized representation derived from the raw sensor data). Depending on the nature of the detected change, these operations may be performed on-vehicle, off-vehicle by a back-end computing platform that collects captured sensor data from a plurality of vehicles, or some combination of these. Further, the operations of evaluating the collected sensor data and detecting a change may take various forms, which may depend on the type of sensor data being evaluated and the type of evaluation being performed. These operations may involve detecting a new semantic element (e.g., a road barricade, a traffic signal, road sign, etc.) at a given area within the real-world environment that was not previously located at the given area).
In regard to claim 9, the combination of Efland and Monaci teaches the method of claim 1, wherein each of the edge devices is coupled to a carrier vehicle and wherein at least part of the videos are captured while the carrier vehicle is in motion (Efland, Para. 70, the collected sensor data discussed herein may generally refer to sensor data captured by one or more sensor-equipped vehicles operating in the real-world environment and may take various forms).
In regard to claim 10, the combination of Efland and Monaci teaches the method of claim 1, wherein the map layer is generated or updated by passing the videos captured by at least one of the edge devices to a neural network running on the edge device and annotating the map layer with object labels outputted by the neural network (Efland, Fig. 6, Para. 156, deriving the representation of the surrounding environment perceived by vehicle 600 using the raw data may involve detecting objects within the vehicle's surrounding environment, which may result in the determination of class labels, bounding boxes, or the like for each detected object. In this respect, the particular classes of objects that are detected by perception subsystem 602a (which may be referred to as “agents”) may take various forms, including both (i) “dynamic” objects that have the potential to move, such as vehicles, cyclists, pedestrians, and animals, among other examples, and (ii) “static” objects that generally do not have the potential to move, such as streets, curbs, lane markings, traffic lights, stop signs, and buildings, among other examples. Further, in practice, perception subsystem 602a may be configured to detect objects within the vehicle's surrounding environment using any type of object detection model now known or later developed, including but not limited to object detection models based on convolutional neural networks (CNN)).
In regard to claim 12, Efland does not teach the method of claim 1, wherein generating or updating the traffic enforcement layer further comprises the server receiving at least some of the traffic rules via user inputs applied to an interactive map editor user interface.
However, Monaci teaches wherein generating or updating the traffic enforcement layer further comprises the server receiving at least some of the traffic rules via user inputs applied to an interactive map editor user interface (Monaci, Para. 135, Different and more granular ways of dynamically rendering the map 230 at different time instances can additionally or alternatively be used. Nonlimiting examples include: modifying colors; modifying line thickness or configuration (e.g., dashed, dotted, stippled, etc.); modifying one or more shapes; modifying transport stop markers; modifying labels for transport stops and/or transportation lines; creating movement in one or more indications (e.g., causing one or more lines or transport stops to repeatedly blink, shrink, enlarge, or fade); and/or extracting and regenerating completely different network visualizations for different times of the day).
Efland and Monaci are analogous art because they both pertain to generating maps based on learned traffic situations.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Efland to include editing capabilities for generating and modifying maps in real time (as taught by Monaci) in order to ensure that the map data has a sufficient level of accuracy for use in a semantic map.
In regard to claim 17, the claim is interpreted and rejected for the same reasons as set forth in the rejection of claim 12 above.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Efland et al. (US 20210406559 A1) in view of Monaci et al. (US 20210264339 A1) and further in view of Dorne et al. (US 20190385453 A1).
In regard to claim 6, the combination of Efland and Monaci does not teach the method of claim 4, wherein generating or updating the traffic insight layer further comprises generating a heatmap of traffic violations detected by the one or more edge devices.
However, Dorne teaches generating a heatmap of traffic violations detected by the one or more edge devices (Dorne, Para. 64, a user may view locations of events associated with law enforcement activities, traffic incidents (such as parking tickets, moving violations, and the like), parking information (such as in public parking lot, meter information), License Plate Number camera locations, and/or the like (such as emergency calls, and the like); Para. 71, the user may select and/or hover a cursor over any grouping area on the map interface to cause the map system to update the interactive heatmap to reflect the number of events within the selected grouping area. Also, the user may select and/or hover a cursor over an intersection in the interactive heatmap to cause the map system to update the grouping numbers in the map interface. For example, when the user selects the grouping 250 (indicating a grouping of 16 events) in the search area 118 in the map interface, the map system will update the interactive heatmap to reflect only the License Plate reads associated with the 16 instances and the search result list 202 will be updated as well. When the user selects the intersection 510 in the interactive heatmap, the map system will update the map interface to reflect only the groupings having events that are part of the selected time period).
Efland, Monaci, and Dorne are analogous art because they all pertain to interactive traffic information mapping systems.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include information regarding law enforcement events on the interactive vehicle information mapping system (as taught by Dorne) in order to allow for rapid and deep searching, retrieval, and/or analysis of various vehicle-related data, objects, features, and/or metadata by the user.
Claims 14 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Efland et al. (US 20210406559 A1) in view of Dorne et al. (US 20190385453 A1).
In regard to claim 14, Efland does not teach the method of claim 1, wherein generating or updating the traffic insight layer further comprises generating a heatmap of traffic violations detected by the one or more edge devices.
However, Dorne teaches generating a heatmap of traffic violations detected by the one or more edge devices (Dorne, Para. 64, a user may view locations of events associated with law enforcement activities, traffic incidents (such as parking tickets, moving violations, and the like), parking information (such as in public parking lot, meter information), License Plate Number camera locations, and/or the like (such as emergency calls, and the like); Para. 71, the user may select and/or hover a cursor over any grouping area on the map interface to cause the map system to update the interactive heatmap to reflect the number of events within the selected grouping area. Also, the user may select and/or hover a cursor over an intersection in the interactive heatmap to cause the map system to update the grouping numbers in the map interface. For example, when the user selects the grouping 250 (indicating a grouping of 16 events) in the search area 118 in the map interface, the map system will update the interactive heatmap to reflect only the License Plate reads associated with the 16 instances and the search result list 202 will be updated as well. When the user selects the intersection 510 in the interactive heatmap, the map system will update the map interface to reflect only the groupings having events that are part of the selected time period).
Efland and Dorne are analogous art because they both pertain to interactive traffic information mapping systems.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include information regarding law enforcement events on the interactive vehicle information mapping system (as taught by Dorne) in order to allow for rapid and deep searching, retrieval, and/or analysis of various vehicle-related data, objects, features, and/or metadata by the user.
In regard to claim 19, the claim is interpreted and rejected for the same reasons as set forth in the rejection of claim 14 above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHARMIN AKHTER whose telephone number is (571)272-9365. The examiner can normally be reached on Monday - Thursday 8:00am-5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Davetta W Goins, can be reached on (571) 272-2957. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHARMIN AKHTER/
Examiner, Art Unit 2689