DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This Office action is in response to the amendments filed on February 13, 2026. Claims 1-25 are currently pending, with Claim 19 being amended.
Response to Amendments
In response to Applicant’s amendments filed February 13, 2026, the Examiner withdraws the previous claim objections and maintains the previous 35 U.S.C. 102 and 103 rejections.
Response to Arguments
Applicant's arguments filed February 13, 2026, have been fully considered but they are not persuasive.
Regarding Applicant’s arguments pertaining to “using a first modality …” (see pages 6-7 of instant arguments), the Examiner is unpersuaded. Anderson teaches that multiple sensor sets provide data for localization, where each set may be a different type of sensor (e.g., lidar, camera, sonar, etc.), one set of sensors is used to perform localization, and another set is used when it performs better for the given environment or operating conditions (see at least Paragraphs [0045], [0063]; Figures 5, 8, 13 of Anderson). When localizing the system, different sensor modalities are used to determine an accurate position of the vehicle within the environment. The sensor data can be fused, if necessary, to optimize the sensor results, but Anderson also teaches that each sensor modality can be used and processed for accuracy, and that the best type of sensor for a given condition is determined, before any data is fused together. In other words, Anderson teaches using the best type of sensor, and fusing the data to improve the results when no single sensor type provides good data. Anderson thus discloses determining the best sensor type for the operating conditions, and then using a second sensor type if it provides better data than the first. As such, the Examiner is unpersuaded and maintains the corresponding rejections.
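For illustration only, the select-then-fuse behavior described above may be sketched as follows. This is the Examiner's own hedged Python sketch, not code from Anderson; the class, quality threshold, and averaging "fusion" are hypothetical stand-ins.

    # Hedged sketch only; hypothetical names and threshold, not Anderson's code.
    from statistics import mean

    QUALITY_THRESHOLD = 0.8  # hypothetical minimum per-modality accuracy

    class Modality:
        def __init__(self, name, quality, pose):
            self.name = name         # e.g., "lidar", "camera", "sonar"
            self._quality = quality  # accuracy score under current conditions
            self._pose = pose        # (x, y) pose estimate from this sensor

        def quality(self, conditions):
            return self._quality     # stand-in for a condition-dependent model

        def estimate_pose(self):
            return self._pose

    def fuse(poses):
        # Stand-in for sensor fusion: average the pose estimates.
        return (mean(p[0] for p in poses), mean(p[1] for p in poses))

    def localize(modalities, conditions):
        # Score every modality for the current conditions before any fusion.
        scored = sorted(((m.quality(conditions), m) for m in modalities),
                        key=lambda pair: pair[0], reverse=True)
        best_score, best = scored[0]
        if best_score >= QUALITY_THRESHOLD:
            return best.estimate_pose()  # the single best modality suffices
        # No single modality is good enough: fuse to improve the result.
        return fuse([m.estimate_pose() for _, m in scored])

    print(localize([Modality("camera", 0.4, (1.0, 2.0)),
                    Modality("lidar", 0.9, (1.2, 2.1))], conditions="fog"))
    # -> (1.2, 2.1): the lidar modality alone exceeds the threshold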
Regarding Applicant’s argument pertaining to “selectively disregard and/or disable …” (see page 7 of instant arguments), the Examiner is unpersuaded. Anderson teaches that sensors may be selected such that at least one sensor is always capable of sensing the information needed to operate the vehicle, and that the system adjusts/selects which sensors to activate based on the operating conditions. When Anderson selects which sensors to activate, it excludes others that do not provide sensor data corresponding to the preset operating conditions for that sensor type, that provide faulty data, or that are less accurate (see at least Paragraphs [0052], [0074], [0080], [0089] of Anderson). Anderson teaches that the system disregards sensor data from sensors that are not reliable in a given environment by selecting the sensors that provide the most accurate data. As such, the Examiner is unpersuaded and maintains the corresponding rejections.
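For illustration only, the exclusion logic the Examiner reads in Anderson may be sketched as follows. This hedged Python sketch uses a hypothetical record format, not Anderson's data model.

    # Hedged sketch only; hypothetical sensor records, not Anderson's code.
    def select_active_sensors(sensors, condition):
        usable = [s for s in sensors
                  if condition in s["rated_conditions"]  # preset operating conditions
                  and not s["faulty"]]                   # exclude faulty data sources
        # Activate the most accurate usable sensors; the rest are disregarded.
        return sorted(usable, key=lambda s: s["accuracy"], reverse=True)

    sensors = [
        {"name": "camera", "rated_conditions": {"clear"},        "faulty": False, "accuracy": 0.9},
        {"name": "lidar",  "rated_conditions": {"clear", "fog"}, "faulty": False, "accuracy": 0.8},
    ]
    print([s["name"] for s in select_active_sensors(sensors, "fog")])  # -> ['lidar']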
Regarding Applicant’s arguments pertaining to the use of multiple localization map layers (see pages 7-8 of instant arguments), the Examiner is unpersuaded. Anderson teaches that localization data can come from a map database containing information about the static environment (see at least Paragraphs [0034], [0042], [0068] of Anderson). A map database of the static environment is a fundamental map layer for localizing the robot: it provides reference data against which the map information is compared, added to, or changed based on sensor data obtained from lidar or camera observations. Anderson further teaches that thematic data can be applied to the map to form a secondary map layer reference (see at least Paragraphs [0060]-[0061] of Anderson). Afrouzi further teaches that the map layers can be generated in real time, compared to a map database, and aligned with a common coordinate system (see at least Col. 22 lines 27-33, Col. 54 lines 12-22, Col. 59 lines 19-20 of Afrouzi). Anderson, in view of Afrouzi, teaches that the map is created in layers (using historical, current, and real-time observations) and updated so as to control the vehicle. As such, the Examiner is unpersuaded and maintains the corresponding rejections.
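For illustration only, the layered-map reading above may be sketched as follows. This is a hedged Python sketch with a hypothetical grid-cell structure; neither Anderson nor Afrouzi discloses this code.

    # Hedged sketch only; hypothetical cell coordinates and labels.
    base_layer = {          # static map database: the reference layer
        (10, 12): "tree",
        (40, 7):  "structure",
    }
    thematic_layer = {}     # secondary layer built from current sensor data

    def update_thematic(observations):
        # Observations are assumed already expressed in the base layer's
        # coordinate system, so the two layers stay spatially aligned.
        for cell, attribute in observations:
            thematic_layer[cell] = attribute

    update_thematic([((10, 12), "foliage-dense"), ((22, 3), "mud")])
    combined = {**base_layer, **thematic_layer}  # layered view used for control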
The remaining arguments are essentially the same as those addressed above and/or below and are unpersuasive for essentially the same reasons. Therefore, the corresponding rejections are maintained.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-4, 6, 8-12, 14, 17-19, and 22-25 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Publication No. 2010/0063651 A1, to Anderson (hereinafter referred to as Anderson).
As per Claim 1, Anderson discloses the features of a vehicle localization system (e.g. Paragraph [0025]; where a method and system is provided for utilizing a versatile robotic control module for localization and navigation of a vehicle), comprising:
a robotic vehicle configured (e.g. Paragraph [0025]; where a method and system is provided for utilizing a versatile robotic control module for localization and navigation of a vehicle) to
navigate within an environment (e.g. Paragraphs [0029], [0031]; where the vehicle navigates an environment using a path mapping module), based, at least in part,
on a predetermined environmental map (e.g. Paragraphs [0068], [0078]; where the knowledge base (702) contains static information about the operating environment of a vehicle, where the fixed map may include streets, structures, trees, and other static objects in the environment);
a first exteroceptive sensor, the first exteroceptive sensor coupled to the robotic vehicle and configured to produce a first data stream (e.g. Paragraphs [0052], [0055], [0057], [0062]; where the vehicle sensor system (500) on the vehicle includes a camera);
a second exteroceptive sensor, the second exteroceptive sensor coupled to the robotic vehicle and configured to produce a second data stream (e.g. Paragraphs [0052], [0061]-[0062]; where the vehicle sensor system (500) on the vehicle includes lidar); and
a processor (e.g. Paragraphs [0046], [0059]; Figure 6; where the machine controller may utilize a data processing system, sensor processing algorithms, and a knowledge base or behavior library in storage) configured to:
localize the robotic vehicle within the environment (e.g. Paragraphs [0040], [0042], [0063]; where the sensor sets provide data for localization)
using a first modality based on the first data stream and a second modality based on the second data stream (e.g. Paragraph [0056]; where the sensor system (500) may retrieve environmental data from one or more sensors to obtain different perspectives of the environment, such as from a camera and a lidar (i.e., where the first modality can be a camera or lidar)); and
selectively disregard and/or disable one of the first modality or the second modality (e.g. Paragraphs [0051]-[0052], [0058], [0062]; where the sensor system (500) may include redundancy sensors that may be used to compensate for the loss and/or inability of another sensor to obtain the needed information to control the vehicle (i.e. disregards corrupted sensor data); and where sensors may be selected such that at least one of the sensors is always capable of sensing information needed to operate the vehicle) to
localize the robotic vehicle within the environment using a subset of localization modalities (e.g. Paragraphs [0058], [0074]; Figures 8-9; where if a sensor fails, the secondary sensor will activate; and where, if the system determines that the sensor data from the environment does not correspond to the preset operating conditions, the process selects which sensors to activate).
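For illustration only, the subset/failover behavior cited above may be sketched as follows. This hedged Python sketch uses a hypothetical Sensor class, not Anderson's implementation.

    # Hedged sketch only; hypothetical health checks, not Anderson's code.
    class Sensor:
        def __init__(self, name, healthy):
            self.name, self._healthy = name, healthy
        def healthy(self):
            return self._healthy

    def localization_subset(sensors):
        # Disregard/disable unhealthy modalities and localize with the rest;
        # if the primary fails, the secondary remains in the active subset.
        subset = [s for s in sensors if s.healthy()]
        if not subset:
            raise RuntimeError("no usable localization modality")
        return subset

    active = localization_subset([Sensor("camera", False), Sensor("lidar", True)])
    print([s.name for s in active])  # -> ['lidar']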
As per Claim 2, Anderson discloses the features of Claim 1, and Anderson further discloses the features of wherein the vehicle is a ground vehicle (e.g. Paragraph [0030]; where the vehicle may be an automobile, truck, harvester, tractor, mower, armored vehicle, or utility vehicle).
As per Claim 3, Anderson discloses the features of Claim 1, and Anderson further discloses the features of wherein the first exteroceptive sensor comprises one or more cameras (e.g. Paragraphs [0052], [0055], [0057], [0062]; where the vehicle sensor system (500) on the vehicle includes a camera).
As per Claim 4, Anderson discloses the features of Claim 1, and Anderson further discloses the features of wherein the second exteroceptive sensor comprises a LiDAR (e.g. Paragraphs [0052], [0061]-[0062]; where the vehicle sensor system (500) on the vehicle includes lidar).
As per Claim 6, Anderson discloses the features of Claim 1, and Anderson further discloses the features of wherein the processor is further configured to localize the vehicle without adding infrastructure to the environment (e.g. Paragraphs [0040], [0078]; where multiple sensors are located on multiple vehicles to perform localization (i.e. no infrastructure is added to the environment outside the vehicles themselves)).
As per Claim 8, Anderson discloses the features of Claim 1, and Anderson further discloses the features of wherein the processor is further configured to disregard and/or disable the first or second localization modality in response to an absence of visual features in the operational environment (e.g. Paragraphs [0062], [0069], [0072], [0074], [0076]; where the system identifies the operating conditions in the environment through sensor data received from a sensor system, determines whether the sensor data corresponds to the preset operating conditions in the sensor table, and, if not, selects which sensors to activate; and where the system may determine which sensors to activate based on conditions of rain, snow, fog, and frost, which may limit the vision or range of certain sensors).
As per Claim 9, Anderson discloses the features of Claim 1, and Anderson further discloses the features of wherein the processor is further configured to disregard and/or disable the first or second localization modality in response to an absence of geometric features (e.g. Paragraphs [0072]-[0073]; where static landmarks are identified using camera images under normal operating conditions, whereas in winter camera images may be unusable and lidar detection may be implemented to determine the operating environment around the vehicle; and where a worksite may have few fixed visual landmarks, and a combination of lidar sensors may be employed to determine the environment).
As per Claim 10, Anderson discloses the features of Claim 1, and Anderson further discloses the features of wherein the processor is further configured to selectively disregard and/or disable the first or second localization modality to support vehicle navigation both on and off a pre-trained path (e.g. Paragraphs [0034], [0039], [0042], [0078]; where the learned knowledge base contains knowledge learned as the vehicle spends time in a specific work area, and the system may reference the knowledge base to select which sensors to use in planning paths; and where the system accesses map data and the a priori knowledge base for path mapping, determines whether deviation from the path is necessary due to an obstacle (i.e. off a planned path), determines the detection range for sensors offering good visibility of the terrain in the path of the vehicle, and, when the vehicle experiences a diminished detection range, selects which sensors to activate).
As per Claim 11, Anderson discloses the features of Claim 1, and Anderson further discloses the features of wherein the processor is further configured to generate a first map layer associated with the first data stream and to register a localization of the robotic vehicle to the first map layer based on the first data stream (e.g. Paragraphs [0034], [0042], [0068], [0078]-[0079]; where the knowledge base includes an a priori knowledge base, an online knowledge base, and a learned knowledge base, which contain static information about the operating environment of the vehicle and may contain fixed work-site information; and where the static map of the environment can be retrieved from the a priori knowledge base, and the system receives sensor data and uses it to localize the vehicle).
As per Claim 12, Anderson discloses the features of Claim 11, and Anderson further discloses the features of wherein the first map layer is pre-computed offline (e.g. Paragraphs [0034], [0068], [0078]-[0079]; where the knowledge base includes an a priori knowledge base, an online knowledge base, and a learned knowledge base, which contain static information about the operating environment of the vehicle and may contain fixed work-site information; where the different paths may be mapped prior to reaching the field; and where the static map of the environment can be retrieved from the a priori knowledge base, the system receives sensor data and classifies it to populate the static map with objects, and camera images can be received and stored in the a priori knowledge base).
As per Claim 14, Anderson discloses the features of Claim 11, and Anderson further discloses the features of wherein the processor is further configured to generate a second map layer associated with the second data stream and to register a localization of the robotic vehicle to the second map layer based on the second data stream (e.g. Paragraphs [0060]-[0061]; where sensor data is received and classified into thematic features to generate a thematic map (i.e. a second map layer), which may contain a spatial pattern of attributes; and where the sensor processing algorithms receive data from a laser range finder, such as a lidar, to identify points in the environment).
As per Claim 17, Anderson discloses the features of Claim 14, and Anderson further discloses the features of wherein the second map layer is ephemeral (e.g. Paragraphs [0046], [0070]; where the storage device can store information temporarily; and where the learned knowledge base may temporarily change the environmental data associated with the work area to reflect the new absence of a tree).
As per Claim 18, Anderson discloses the features of Claim 14, and Anderson further discloses the features of wherein the processor is configured to dynamically update the second map layer (e.g. Paragraph [0069]; where the online knowledge base may dynamically provide information to a machine control process which enables adjustment to sensor data processing, site-specific sensor accuracy calculations, and/or exclusion of sensor information, and may include current weather conditions of the operating environment).
As per Claim 19, Anderson discloses the features of Claim 14, and Anderson further discloses the features of wherein the processor is configured to spatially register the second map layer to the first map layer (e.g. Paragraphs [0060]-[0061], [0079]; where sensor data is received and classified into thematic features to generate a thematic map, which may contain a spatial pattern of attributes; and where the sensor processing algorithms receive data from a laser range finder, such as a lidar, to identify points in the environment; and where the process retrieves a static map (i.e. the first map layer) and then populates the map with the detected objects to form a thematic map (i.e. a second map layer correlated to the first map layer)).
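For illustration only, registering detected objects onto the static layer may be sketched as follows. This hedged Python sketch uses a hypothetical frame-transform callback and labels; it is not the references' code.

    # Hedged sketch only; hypothetical transform and labels.
    def register_second_layer(static_map, detections, to_map_frame):
        # Transform each detection into the static map's frame, then attach
        # it to the corresponding cell as a thematic attribute (the spatial
        # registration of the second layer to the first).
        thematic = {}
        for pose, label in detections:
            cell = to_map_frame(pose)
            thematic[cell] = label
        return thematic

    static_map = {(0, 0): "road", (1, 0): "road"}
    snap = lambda p: (round(p[0]), round(p[1]))  # stand-in frame transform
    print(register_second_layer(static_map, [((0.9, 0.2), "pothole")], snap))
    # -> {(1, 0): 'pothole'}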
As per Claim 22, Anderson discloses the features of Claim 1, and Anderson further discloses the features of wherein the processor is further configured to perform context-aware modality switching (e.g. Paragraphs [0062], [0069], [0072], [0074], [0076]; where the system identifies the operating conditions in the environment through sensor data received from a sensor system, determines whether the sensor data corresponds to the preset operating conditions in the sensor table, and, if not, selects which sensors to activate; and where the system may determine which sensors to activate based on conditions of rain, snow, fog, and frost, which may limit the vision or range of certain sensors (i.e. context)).
As per Claim 23, Anderson discloses the features of Claim 1, and Anderson further discloses the features of wherein the processor is further configured to
prioritize one of the first or the second localization modality (e.g. Paragraphs [0051]-[0052], [0058], [0062]; where the sensor system (500) may include redundancy sensors that may be used to compensate for the loss and/or inability of another sensor to obtain the needed information to control the vehicle; and where sensors may be selected such that at least one of the sensors is always capable of sensing information needed to operate the vehicle) to
localize the robotic vehicle based on one or more factors related to time, space, and/or robotic vehicle action (e.g. Paragraphs [0037], [0045], [0068], [0070], [0078]; where the knowledge base contains information about the operating environment for specific times of the year and based on the amount of time a vehicle spends in a specific work area; and where the system detects a dynamic condition that impacts the movement of the vehicle, such as moving the vehicle to a new location, detection of an obstacle, etc., and the vehicle uses the information collected by the sensor system to identify a location of the vehicle and conduct localization by retrieving a map associated with the location of the vehicle (i.e. time and location information)).
As per Claim 24, Anderson discloses the features of Claim 1, and Anderson further discloses the features of wherein the processor is further configured to
prioritize one of the first or the second localization modality (e.g. Paragraphs [0051]-[0052], [0058], [0062]; where the sensor system (500) may include redundancy sensors that may be used to compensate for the loss and/or inability of another sensor to obtain the needed information to control the vehicle; and where sensors may be selected such that at least one of the sensors is always capable of sensing information needed to operate the vehicle) to
localize the robotic vehicle based on pre-trained explicit annotations (e.g. Paragraphs [0064], [0068], [0071]-[0072]; where the knowledge base contains information about the operating environment, including streets, structures, tree locations, etc., which may be used to plan actions, and where the information can be used to classify and assign attributes to the identified objects in the environment, such as classifying an item as a “telephone pole”; and where the sensors may be used for localization based on which sensor is activated).
As per Claim 25, Anderson discloses the features of Claim 1, and Anderson further discloses the features of wherein the processor is further configured to
prioritize the first or the second localization modality (e.g. Paragraphs [0051]-[0052], [0058], [0062]; where the sensor system (500) may include redundancy sensors that may be used to compensate for the loss and/or inability of another sensor to obtain the needed information to control the vehicle; and where sensors may be selected such that at least one of the sensors is always capable of sensing information needed to operate the vehicle) to
localize the robotic vehicle based on one or more specified time(s), time(s) of day, and/or locations (e.g. Paragraphs [0037], [0045], [0068], [0070], [0078]; where the knowledge base contains information about the operating environment for specific times of the year and based on the amount of time a vehicle spends in a specific work area; and where the system detects a dynamic condition that impacts the movement of the vehicle, such as moving the vehicle to a new location, detection of an obstacle, etc., and the vehicle uses the information collected by the sensor system to identify a location of the vehicle and conduct localization by retrieving a map associated with the location of the vehicle (i.e. time and location information)).
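For illustration only, time- and location-based prioritization may be sketched as follows. This hedged Python sketch uses a hypothetical schedule; Anderson discloses no such table.

    # Hedged sketch only; hypothetical schedule, not Anderson's code.
    PRIORITY_SCHEDULE = {
        # (time_of_day, location) -> preferred localization modality
        ("day",   "orchard"): "camera",
        ("night", "orchard"): "lidar",
        ("day",   "field"):   "lidar",
    }

    def preferred_modality(time_of_day, location, default="lidar"):
        return PRIORITY_SCHEDULE.get((time_of_day, location), default)

    print(preferred_modality("night", "orchard"))  # -> lidar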
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 5, 7, 13, 15-16, and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication No. 2010/0063651 A1, to Anderson (hereinafter referred to as Anderson), in view of U.S. Patent No. 11,274,929 B1, to Afrouzi et al. (hereinafter referred to as Afrouzi).
As per Claim 5, Anderson discloses the features of Claim 1, but Anderson fails to disclose every feature of further comprising: a first proprioceptive sensor, the first proprioceptive sensor being coupled to the vehicle and being configured to produce a third data stream, the processor being configured to localize the robotic vehicle using a third modality based on the third data stream in combination with the first modality or the second modality.
However, Afrouzi, in a similar field of endeavor, teaches a method for constructing a map while performing work, where the data may be collected using a proprioceptive sensor and an exteroceptive sensor, where the processor may use data from one or both of the proprioceptive and exteroceptive sensors to generate or update the map, and where the processor may receive and process data from internal or external sensors to localize the robot (e.g. Col. 8 lines 19-24; Col. 52 lines 17-25).
It would have been obvious to a person of ordinary skill in the art on or before the effective filing date of the Applicant’s invention, with a reasonable expectation of success, to modify the machine localization system of Anderson with the feature of using different types of sensors to localize a vehicle, as in the system of Afrouzi, in order to improve the accuracy of the map (see at least Col. 82 lines 16-21 of Afrouzi).
As per Claim 7, Anderson discloses the features of Claim 1, and Anderson further discloses the features of wherein the processor is further configured to selectively disregard and/or disable the first localization modality or the second localization modality ‘…’ in response to a change in an operational environment as compared to the predetermined environmental map (e.g. Paragraphs [0062], [0074], [0076]; where the system identifies the operating conditions in the environment through sensor data received from a sensor system, determines whether the sensor data corresponds to the preset operating conditions in the sensor table, and, if not, selects which sensors to activate).
Anderson fails to disclose every feature of selectively disregard and/or disable the first localization modality or the second localization modality in real-time.
However, Afrouzi, in a similar field of endeavor, teaches a method for constructing a map while performing work, where the map may be generated in real-time, and may determine a choice or state or behavior based on agreement or disagreement between more than one sensor, and may ignore data a sensor reads when it is not consistent with the preceding data (e.g. Col. 10 lines 34-42; Col. 54 lines 12-22; Col. 95 line 60- Col. 96 line 2; Col. 106 lines 56-59).
It would have been obvious to a person of ordinary skill in the art on or before the effective filing date of the Applicant’s invention, with a reasonable expectation of success, to modify the machine localization system of Anderson with the feature of determining map data in real time, as in the system of Afrouzi, in order to determine and implement a more efficient path as the robot is traveling (see at least Col. 96 lines 17-24, Col. 111 lines 27-33 of Afrouzi).
As per Claim 13, Anderson discloses the features of Claim 11, but Anderson fails to disclose every feature of wherein the first map layer is generated during a training mode.
However, Afrouzi, in a similar field of endeavor, teaches a method for constructing a map while performing work, where a training period of the robot may include the robot inspecting the environment various times with the same sensor, and the training may occur over one or multiple sessions, to generate a first map layer (e.g. Col. 55 lines 13-59).
It would have been obvious to a person of ordinary skill in the art on or before the effective filing date of the Applicant’s invention, with a reasonable expectation of success, to modify the machine localization system of Anderson with the feature of generating a map during a training period, as in the system of Afrouzi, in order to validate the data of the first map (see at least Col. 55 lines 13-17 of Afrouzi).
As per Claim 15, Anderson discloses the features of Claim 14, but Anderson fails to disclose every feature of wherein the second map layer is computed in real time.
However, Afrouzi, in a similar field of endeavor, teaches a method for constructing a map while performing work, where the map may be generated in real-time (e.g. Col. 59 lines 19-20; Col. 54 lines 12-22; Col. 95 line 60- Col. 96 line 2).
It would have been obvious to a person of ordinary skill in the art on or before the effective filing date of the Applicant’s invention, with a reasonable expectation of success, to modify the machine localization system of Anderson with the feature of determining map data in real time, as in the system of Afrouzi, in order to determine and implement a more efficient path as the robot is traveling (see at least Col. 96 lines 17-24, Col. 111 lines 27-33 of Afrouzi).
As per Claim 16, Anderson discloses the features of Claim 14, but Anderson fails to disclose every feature of wherein the second map layer is generated during robotic vehicle operation.
However, Afrouzi, in a similar field of endeavor, teaches a method for constructing a map while performing work, where the robot maps an area while performing work; and where the processor of the robot may construct a map of the environment using data from one or more sensors while the robot performs work within recognized areas of the environment (e.g. Col. 21 lines 33-34; Col. 22 lines 60-62).
It would have been obvious to a person of ordinary skill in the art on or before the effective filing date of the Applicant’s invention, with a reasonable expectation of success, to modify the machine localization system of Anderson with the feature of determining map data while the robot is operating, as in the system of Afrouzi, in order to determine and implement a more efficient path as the robot is traveling (see at least Col. 96 lines 17-24, Col. 111 lines 27-33 of Afrouzi).
As per Claim 20, Anderson discloses the features of Claim 14, but Anderson fails to disclose every feature of wherein the processor is further configured to spatially register the first map layer and the second map layer to a common coordinate frame.
However, Afrouzi, in a similar field of endeavor, teaches a method for constructing a map while performing work, where the processor can transform the vectors measured relative to different coordinate systems and describing the environment to be transformed into a single coordinate system (e.g. Col. 34 lines 59-63).
It would have been obvious to a person of ordinary skill in the art on or before the effective filing date of the Applicant’s invention, with a reasonable expectation of success, to modify the machine localization system of Anderson with the feature of utilizing a common coordinate system, as in the system of Afrouzi, in order to establish a more accurate map (see at least Col. 21 lines 6-10, Col. 35 lines 42-49 of Afrouzi).
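For illustration only, transforming measurements into a single coordinate system may be sketched as follows. This hedged Python sketch uses a generic 2-D rigid transform as a stand-in; it does not reproduce Afrouzi's actual formulation.

    # Hedged sketch only; a 2-D rigid transform as a stand-in.
    import math

    def to_common_frame(point, sensor_pose):
        # sensor_pose = (tx, ty, theta): the sensor frame's pose expressed
        # in the common (map) frame. Rotate the point, then translate it.
        (px, py), (tx, ty, theta) = point, sensor_pose
        cx = px * math.cos(theta) - py * math.sin(theta) + tx
        cy = px * math.sin(theta) + py * math.cos(theta) + ty
        return (cx, cy)

    # A return at (1.0, 0.0) in the sensor frame, with the sensor at
    # (5.0, 2.0) rotated 90 degrees, lands at about (5.0, 3.0) in the map frame.
    print(to_common_frame((1.0, 0.0), (5.0, 2.0, math.pi / 2)))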
As per Claim 21, Anderson discloses the features of Claim 20, and Anderson further discloses the features of wherein the processor is configured to spatially register semantic annotations to the first map layer (e.g. Paragraphs [0060]-[0061], [0079]; where sensor data is received and classified into thematic features to generate a thematic map, which may contain a spatial pattern of attributes (i.e. semantic annotations); and where the sensor processing algorithms receive data from a laser range finder, such as a lidar, to identify points in the environment; and where the process retrieves a static map (i.e. the first map layer) and then populates the map with the detected objects to form a thematic map (i.e. annotations spatially registered to the first map layer)).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Dalal et al. (U.S. 12,433,463 B2), which teaches a method for mapping an environment around a vehicle.
Karlsson (U.S. 2005/0234679 A1), which teaches a method for selective integration of sensor data for a robot.
Moustafa et al. (U.S. 2022/0126864 A1), which teaches a method for navigating a robot, and disabling sensors whose data is corrupted while using remaining sensors to navigate.
Nehmadi et al. (U.S. 2022/0398851 A1), which teaches a system for receiving sensor data from a plurality of sensor modalities, and disregarding a sensor modality to avoid corrupting perception results.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MERRITT LEVY whose telephone number is (571)270-5595. The examiner can normally be reached Mon-Fri 0630-1600.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abby Flynn can be reached at (571) 272-9855. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MERRITT LEVY/Examiner, Art Unit 3663
/ABBY J FLYNN/Supervisory Patent Examiner, Art Unit 3663