Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 1-6 are objected to because of the following informalities:
Regarding claim 1, lines 1-2 recite "METHOD FOR DETECTING THE LOCATION AND SPEED OF A RAILWAY VEHICLE which, applicable as an additional system to the systems currently existing in railway vehicles". This is not in proper English format, and "the location and speed" lacks antecedent basis. This limitation should recite "A method for detecting a location and speed of a railway vehicle which, applicable as an additional system to the systems currently existing in railway vehicles".
The Examiner also requests correction of the other antecedent basis issues in claim 1 and in claims 2-6. Proper correction is required.
Regarding claim 1, lines 6-9 recite "processing of the information of the optical signal (1a) captured by the camera or cameras 10 (1) by means of optical processors (3a, 3b, 3c), based on artificial intelligence technologies and deep neural networks, capable of determining the speed and location of the train". The phrase "capable of determining" may raise an issue of vagueness and indefiniteness. This limitation should recite "processing of the information of the optical signal (1a) captured by the camera or cameras (1) by means of optical processors (3a, 3b, 3c), based on artificial intelligence technologies and deep neural networks, to determine the speed and location of the train". Proper correction is required.
Regarding claim 2, the claim recites "METHOD FOR DETECTING THE LOCATION AND SPEED OF A RAILWAY VEHICLE according to claim 1 characterized". This is not in proper English format, and because the claim depends on claim 1, it should recite "The method for detecting the location and speed of the railway vehicle according to claim 1, further characterized". Proper correction is required.
Regarding claim 3, the claim recites "METHOD FOR DETECTING THE LOCATION AND SPEED OF A RAILWAY VEHICLE according to claim 1 or 2, characterized". This is not in proper English format, and because the claim depends on claim 1, it should recite "The method for detecting the location and speed of the railway vehicle according to claim 1, further characterized". Furthermore, claim 3 should depend only on claim 1 because it further limits the limitations of claim 1. Proper correction is required.
Regarding claim 4, lines 1-2 recite "EQUIPMENT FOR DETECTING THE LOCATION AND SPEED OF A RAILWAY VEHICLE which, applicable as an additional system to the systems currently existing in railway vehicles". This is not in proper English format, and "the location and speed" lacks antecedent basis. This limitation should recite "An equipment for detecting a location and speed of a railway vehicle which, applicable as an additional system to the systems currently existing in railway vehicles". The Examiner also suggests reciting "A system" instead of "An equipment". The Examiner also requests correction of the other antecedent basis issues in claim 4 and in claims 5-6. Proper correction is required.
Regarding claim 4, lines 6-9 recite "hardware provided with optical processor chips (3a, 3b, 3c), based on artificial intelligence technologies and deep neural networks capable of determining the speed and location of the train". The phrase "capable of determining" may raise an issue of vagueness and indefiniteness. This limitation should recite "hardware provided with optical processor chips based on artificial intelligence technologies and deep neural networks to determine the speed and location of the train". Proper correction is required.
Regarding claim 5, the claim recites "EQUIPMENT FOR DETECTING THE LOCATION AND SPEED OF A RAILWAY VEHICLE according to claim 4 characterized". This is not in proper English format, and because the claim depends on claim 4, it should recite "The equipment for detecting the location and speed of the railway vehicle according to claim 4, further characterized". Proper correction is required.
Regarding claim 6, the claim recites "EQUIPMENT FOR DETECTING THE LOCATION AND SPEED OF A RAILWAY VEHICLE according to claim 4 or 5, characterized". This is not in proper English format, and because the claim depends on claim 4, it should recite "The equipment for detecting the location and speed of the railway vehicle according to claim 4, further characterized". Furthermore, claim 6 should depend only on claim 4 because it further limits the limitations of claim 4. Proper correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-6 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The limitations, under their broadest reasonable interpretation, cover a mental process (concepts performed in the human mind, including observation, evaluation, judgment, and opinion) and mathematical concepts and calculations. The claims recite a method/equipment for detecting the location and speed of a railway vehicle. This judicial exception is not integrated into a practical application because the steps do not add meaningful limitations that would tie them to a particular technological problem to be solved. Claims 1-6 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the steps of the claimed invention can be performed mentally, using paper and pencil, as the solving of a mathematical problem, and no additional features in the claims would preclude them from being performed as such, except for the generic computer elements and generic image sensors recited at a high level of generality (i.e., processor, memory, and cameras).
According to the USPTO guidelines, a claim is directed to non-statutory subject matter if:
STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter), or
STEP 2: the claim recites a judicial exception, e.g. an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:
STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
Using the two-step inquiry, it is clear that claims 1 and 4 are directed to an abstract idea, as shown below:
Regarding independent claims 1 and 4:
STEP 1: Do the claims fall within one of the statutory categories?
YES.
Claims 1 and 4 are directed to a method and an equipment, i.e., a process and a machine.
STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon or an abstract idea?
YES.
The claims are directed toward a mental process and mathematical concepts (i.e., an abstract idea).
With regard to STEP 2A (PRONG 1), the guidelines provide three groupings of subject matter that are considered abstract ideas:
Mathematical concepts – mathematical relationships, mathematical formulas or equations, mathematical calculations;
Certain methods of organizing human activity – fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and
Mental processes – concepts that are practicably performed in the human mind (including an observation, evaluation, judgment, opinion).
Claims 1 and 4 comprise a mental process that can be practicably performed in the human mind, including the solving of a mathematical problem (or performed by generic computer components and generic image sensors configured to perform the method), and are therefore directed to an abstract idea.
Regarding claims 1 and 4 (claim 1 as representative):
METHOD FOR DETECTING THE LOCATION AND SPEED OF A RAILWAY VEHICLE which, applicable as an additional system to the systems currently existing in railway vehicles, is characterized by comprising, at least, the following steps:
- capturing of one imagen of the track (6) and the passive elements (5) existing thereon, by means of one or more video cameras (1) installed in the front of the train (2) (collecting video/image data and visualizing video/image data in front of the train, i.e. data collection is insignificant extra solution activity and mental process of visualizing the collected data);
- processing of the information of the optical signal (1a) captured by the camera or cameras 10 (1) by means of optical processors (3a, 3b, 3c), based on artificial intelligence technologies and deep neural networks, capable of determining the speed and location of the train (2) (mentally processing the collected video/image data using human intelligence to evaluate and estimate using paper pencil and solving mathematical relationship to determine the location and speed of the train);
- sending the processed location and speed information (i) to at least one control center (7): - representation on a screen (4) of the processed location and speed information (i) of the train (2) (providing the determined location and speed to control center which is mental process and insignificant post solution activity).
The above limitations, as drafted, recite a simple process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind by human intelligence and the solving of a mathematical process. Furthermore, the limitations quoted above (the capturing, processing, sending, and representation steps of representative claim 1) amount to insignificant extra-solution activity.
The Examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that can be performed in the human mind, or with the aid of paper and pencil, to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas, the 'basic tools of scientific and technological work' that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 ("'[M]ental processes and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work'" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978).
Furthermore, the Examiner notes that even if the mathematical concepts are combined with the mental process, a combination of abstract ideas does not render a claim eligible. See MPEP 2106.04(II)(A)(2): because a judicial exception is not eligible subject matter, Bilski, 561 U.S. at 601, 95 USPQ2d at 1005-06 (quoting Chakrabarty, 447 U.S. at 309, 206 USPQ at 197 (1980)), if there are no additional claim elements besides the judicial exception, or if the additional claim elements merely recite another judicial exception, that is insufficient to integrate the judicial exception into a practical application. See, e.g., RecogniCorp, LLC v. Nintendo Co., 855 F.3d 1322, 1327, 122 USPQ2d 1377 (Fed. Cir. 2017) ("Adding one abstract idea (math) to another abstract idea (encoding and decoding) does not render the claim non-abstract").
Other than the generic and well-known computer hardware (i.e., processors) and the generic image sensors (cameras) recited in independent claims 1 and 4 and disclosed in the specification, nothing in the elements of claims 1 and 4 precludes the processing from being performed as a mental process, or merely on the basis of observation, evaluation, judgment, and thought, with the mathematical problem solved using paper and pencil by human intelligence. The generic artificial intelligence and deep learning components recited in independent claims 1 and 4 are a mere idea of a solution without details per MPEP 2106.05(f), or the idea of a technological environment without detail per MPEP 2106.05(h). The generic computing hardware, generic cameras, and artificial intelligence components are recited merely to automate the mental process (Step 2A, Prong 1: recites an abstract idea = YES).
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
NO.
The claims do not recite additional elements that integrate the judicial exception into a practical application.
With regard to STEP 2A (prong 2), whether the claim recites additional elements that integrate the judicial exception into a practical application, the guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application:
an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;
an additional element that applies or uses a judicial exception to affect a particular treatment or prophylaxis for a disease or medical condition;
an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;
an additional element effects a transformation or reduction of a particular article to a different state or thing; and
an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
While the guidelines further state that the exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application:
an additional element merely recites the words “apply it” (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea;
an additional element adds insignificant extra-solution activity to the judicial exception; and
an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.
Claims 1 and 4 do not recite any of the exemplary considerations that are indicative of an abstract idea having been integrated into a practical application. Claims 1 and 4 recite the further limitations quoted above in the Step 2A (Prong 1) analysis: capturing one or more images of the track and the passive elements thereon by means of video cameras (data collection, which is insignificant extra-solution activity, together with the mental process of visualizing the collected data); processing the captured optical signal to determine the speed and location of the train (a mental process of evaluation and estimation, and the solving of a mathematical relationship); and sending the processed location and speed information to at least one control center and representing it on a screen (a mental process and insignificant post-solution activity).
These limitations are recited at a high level of generality (i.e., as a general action or calculation being taken based on the results of the acquiring step) and amount to mere post-solution actions, which are a form of insignificant extra-solution activity without further detail. Furthermore, the additional elements are claimed generically and operate in their ordinary capacity, such that they do not use the judicial exception in a manner that imposes a meaningful limit on it. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Other than the generic and well-known computer hardware (i.e., processors) and the generic image sensors (cameras) recited in independent claims 1 and 4 and disclosed in the specification, nothing in the elements of claims 1 and 4 precludes the processing from being performed as a mental process, or merely on the basis of observation, evaluation, judgment, and thought, with the mathematical problem solved using paper and pencil by human intelligence. The generic artificial intelligence and deep learning components recited in independent claims 1 and 4 are a mere idea of a solution without details per MPEP 2106.05(f), or the idea of a technological environment without detail per MPEP 2106.05(h). The generic computing hardware, generic cameras, and artificial intelligence components are recited merely to automate the mental process (Step 2A, Prong 2: integration into a practical application = NO).
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
NO.
The claims do not recite additional elements that amount to significantly more than the judicial exception.
With regard to STEP 2B, whether the claims recite additional elements that provide significantly more than the recited judicial exception, the guidelines specify that the pre-guideline procedure is still in effect. Specifically, that examiners should continue to consider whether an additional element or combination of elements:
adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present; or
simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.
As stated above, other than the generic and well-known computer hardware (i.e., processors) and the generic image sensors (cameras) recited in independent claims 1 and 4 and disclosed in the specification, nothing in the elements of claims 1 and 4 precludes the processing from being performed as a mental process, or merely on the basis of observation, evaluation, judgment, and thought, with the mathematical problem solved using paper and pencil by human intelligence. The generic artificial intelligence and deep learning components recited in independent claims 1 and 4 are a mere idea of a solution without details per MPEP 2106.05(f), or the idea of a technological environment without detail per MPEP 2106.05(h). The generic computing hardware, generic cameras, and artificial intelligence components are recited merely to automate the mental process.
Thus, since claims 1 and 4: (a) are directed toward an abstract idea, (b) do not recite additional elements that integrate the judicial exception into a practical application, and (c) do not recite additional elements that amount to significantly more than the judicial exception, claims 1 and 4 are not eligible subject matter under 35 U.S.C. 101 (Step 2B: significantly more = NO).
Regarding dependent claims 2-3 and 5-6: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. Claims 2-3 and 5-6 further limit the abstract idea of independent claims 1 and 4. The limitations of these dependent claims fall under (a) a mental process, including observation, evaluation, and judgment, which can be performed in the human mind; (b) insignificant pre/post-solution extra activity of generating/gathering data or performing mathematical calculations; or (c) generic computers or components configured to perform the process; and the generic artificial intelligence/deep learning components recited in claims 2-3 and 5-6 are a mere idea of a solution without details per MPEP 2106.05(f) or the idea of a technological environment without detail per MPEP 2106.05(h).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6 are rejected under 35 U.S.C. 103 as being unpatentable over Garcia et al. ("Application of Computer Vision and Deep Learning in the Railways Domain for Autonomous Train Stop Operation," Proceedings of the 2020 IEEE/SICE International Symposium on System Integration, IEEE 978-1-7281-6667-4/20, pages 943-948, cited on USPTO-892) in view of Park et al. (US 2020/0026960).
Regarding claims 1 and 4, Garcia discloses a method/equipment for detecting the location and speed of a railway vehicle which, applicable as an additional system to the systems currently existing in railway vehicles, is characterized by comprising, at least, the following steps (Garcia, Figs. 1-2, Table I, Abstract; page 943, paragraph I, INTRODUCTION; pages 943-944, paragraph II, PROBLEM DEFINITION; and page 944, paragraph III, RELATED WORK, right column, lines 10-20, where Garcia states: "Visual Odometry (VO). Odometry can be defined as the use of data from motion sensors in order to estimate changes in position over time [location/speed]. Visual odometry (VO) is a particular case of odometry, where the position information is acquired through camera images. The term Visual Odometry was first introduced by Niester et al. [21] proposing a method for estimating camera motion using RANSAC [22] outlier refinement method and tracking extracted features across all frames. Before that, feature matching was done just in consecutive frames. Later researches have shown that VO methods perform significantly better than wheel odometry in robotics while the cost of cameras is much lower compared to more accurate IMUs and LASER scanners [8]. This scenario raises the need for exploration of the applicability of VO in the railway domain and autonomous driving trains." Garcia also discloses SLAM, i.e., simultaneous localization and mapping, i.e., location/speed):
- capturing of one or more images of the track and the passive elements (5) existing thereon, by means of one or more video cameras installed in the front of the train (Garcia, Figs. 1-2, Table I; paragraph II, PROBLEM DEFINITION, page 944, left column, third paragraph, discloses that the sensors used in visual localization systems include monocular, stereo, and RGB-D cameras, and LIDAR; page 945, left column, paragraph IV, USE CASE: TRAIN STOP OPERATION: "The use case scenario is the Autonomous Urban Train where artificial intelligence and high-performance computational capabilities are used to increase the dependability and the safety of the system. The objective is to apply Computer Vision and Deep Learning techniques to improve different autonomous train operation functionalities as precision stop, rolling stock coupling operation or person and obstacle detection-identification in railroads"; and in the second paragraph (Fig. 1) Garcia discloses: "The selected use case is the automatic accurate stop at door equipped platforms aligning the vehicle and platform doors. The goal is to perform precise localization inside the platform area using visual patterns detection, identification and tracking in order to reach an accurate stopping point and managing automatic train operation (traction and brake commands, ATO functionality). A contribution is expected to the automatic train operation system, adding the visual localization estimation information [location/speed detection] to the usual trains odometry [location/speed] data calculations based on radars and encoders." This corresponds to capturing of one or more images of the track and the passive elements existing thereon, by means of one or more video cameras installed in the front of the train);
- processing of the information of the optical signal captured by the camera or cameras by means of optical processors based on artificial intelligence technologies and deep neural networks, capable of determining the speed and location of the train (Garcia, page 944, paragraph III, RELATED WORK, right column, lines 10-20: "Visual Odometry (VO). Odometry can be defined as the use of data from motion sensors in order to estimate changes in position over time [location/speed]. Visual odometry (VO) is a particular case of odometry, where the position information is acquired through camera images"; Garcia, Figs. 1-2, Table I; paragraph II, PROBLEM DEFINITION, page 944, left column, third paragraph, discloses that the sensors used in visual localization systems include monocular, stereo, and RGB-D cameras; page 945, left column, paragraph IV, USE CASE: TRAIN STOP OPERATION: "The use case scenario is the Autonomous Urban Train where artificial intelligence and high-performance computational capabilities are used to increase the dependability and the safety of the system. The objective is to apply Computer Vision and Deep Learning techniques to improve different autonomous train operation functionalities as precision stop, rolling stock coupling operation or person and obstacle detection-identification in railroads"; and in the second paragraph (Fig. 1) Garcia discloses: "The selected use case is the automatic accurate stop at door equipped platforms aligning the vehicle and platform doors. The goal is to perform precise localization inside the platform area using visual patterns detection, identification and tracking in order to reach an accurate stopping point and managing automatic train operation (traction and brake commands, ATO functionality). A contribution is expected to the automatic train operation system, adding the visual localization estimation information to the usual trains odometry [location/speed] data calculations based on radars and encoders." All of this corresponds to processing of the information of the optical signal (1a) captured by the camera or cameras by means of optical processors based on artificial intelligence technologies and deep neural networks, capable of determining the speed and location of the train);
- sending the processed location and speed information (i) to at least one control center, and representation on a screen of the processed location and speed information of the train (Garcia, paragraph II, PROBLEM DEFINITION, page 943, lines 1-10: "Communication-Based Train Control (CBTC) is a standard defined by the IEEE (IEEE 1474 [5]) which defines a set of performance and functional requirements for track and onboard equipment in order to enhance performance, availability, operations and the protection of the involved systems. A CBTC system could be defined as an automatic train control system where the track and onboard subsystems are continuously communicating"; page 944, right column, lines 1-10, Garcia discloses: "In GOA3 and GOA4 systems, as there is not a driver inside the train, an accurate train location system is required. Precise positioning systems can reach a higher grade of automation [7]. A train that implements GOA3 or GOA4 level can be considered as a robot that navigates through a track in indoor and outdoor environments including underground stations. Therefore, it becomes essential to implement precise and reliable train localization subsystems. A GOA3 or GOA4 train must compute, among others, the braking curve or the train stopping location with precision"; Garcia, Figs. 1-2, page 944, paragraph III, RELATED WORK, right column, lines 10-20: "Visual Odometry (VO). Odometry can be defined as the use of data from motion sensors in order to estimate changes in position over time [location/speed]"; and page 945, left column, paragraph IV, USE CASE: TRAIN STOP OPERATION: "The use case scenario is the Autonomous Urban Train where artificial intelligence and high-performance computational capabilities are used to increase the dependability and the safety of the system. The objective is to apply Computer Vision and Deep Learning techniques to improve different autonomous train operation functionalities as precision stop, rolling stock coupling operation or person and obstacle detection-identification in railroads"; and in the second paragraph (Fig. 1) Garcia discloses: "The selected use case is the automatic accurate stop at door equipped platforms aligning the vehicle and platform doors. The goal is to perform precise localization inside the platform area using visual patterns detection, identification and tracking in order to reach an accurate stopping point and managing automatic train operation (traction and brake commands, ATO functionality). A contribution is expected to the automatic train operation system, adding the visual localization estimation information to the usual trains odometry [location/speed] data." In the system of Garcia it would have been obvious to send the visual odometry data of the train, i.e., the location/speed of the train obtained by visual odometry, to a computer screen of a Communication-Based Train Control (CBTC) center, which is defined by IEEE standard 1474, as disclosed at pages 943-944, paragraph II, PROBLEM DEFINITION).
Furthermore, regarding claim 4, Garcia discloses one or more high resolution and high capture rate video cameras incorporated into the train (Garcia, Figures 1-2, show the train and image capture; Table 1 discloses a list of one or more cameras and other hardware, and 3D reconstruction using the cameras; and page 945, paragraph IV, USE CASE: TRAIN STOP OPERATION, second paragraph, discloses visual localization and visual odometry. Therefore, it would have been obvious in the system of Garcia to use one or more high resolution and high capture rate video cameras incorporated into the train because visual localization and odometry would require such cameras).
Furthermore, in the same field of endeavor of remote and autonomous vehicle driving, Park discloses determining the location/speed of the vehicle (Park, paragraph 0005, discloses: "Embodiments of the present disclosure relate to regression-based line detection for autonomous driving machines. Systems and methods are disclosed that preserve rich spatial information through a deep-learning model by providing compressed information at a down-sized spatial resolution or dimension as compared to a spatial resolution or dimension of an input image. As such, embodiments of the present disclosure relate to line detection for autonomous driving machines including, but not limited to, lane lines, road boundaries, text on roads, or signage (e.g., poles, street signs, etc.)"; paragraphs 0035 and 0050 disclose image cameras/processors and artificial intelligence for autonomous driving; Park, paragraph 00138, discloses IMU [inertial measurement], LIDAR, and RADAR data, which obviously include location/speed data; and Park, paragraph 0098, discloses: "The controller(s) 1036 may provide the signals for controlling one or more components and/or systems of the vehicle 1000 in response to sensor data received from one or more sensors (e.g., sensor inputs). The sensor data may be received from, for example and without limitation, global navigation satellite systems sensor(s) 1058 (e.g., Global Positioning System sensor(s)), RADAR sensor(s) 1060, ultrasonic sensor(s) 1062, LIDAR sensor(s) 1064, inertial measurement unit (IMU) sensor(s) 1066 (e.g., accelerometer(s), gyroscope(s), magnetic compass(es), magnetometer(s), etc.), microphone(s) 1096, stereo camera(s) 1068, wide-view camera(s) 1070 (e.g., fisheye cameras), infrared camera(s) 1072, surround camera(s) 1074 (e.g., 360 degree cameras), long-range and/or mid-range camera(s) 1098, speed sensor(s) 1044 (e.g., for measuring the speed of the vehicle 1000)". This obviously corresponds to determining the location/speed of the vehicle) and
sending the processed location and speed of the vehicle to a control center display (Park, paragraph 0098, note: location/speed of the vehicle; paragraph 0196 discloses: "the server(s) 1078 may receive data from the vehicles and apply the data to up-to-date real-time neural networks for real-time intelligent inferencing. The server(s) 1078 may include deep-learning supercomputers and/or dedicated AI computers powered by GPU(s) 1084, such as a DGX and DGX Station machines developed by NVIDIA. However, in some examples, the server(s) 1078 may include deep learning infrastructure that use only CPU-powered datacenters"; and Park, paragraph 0099, discloses a display. Therefore, in the system of Park, it would have been obvious to send the processed location and speed of the vehicle to a control center display).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to capture one or more images of the track (6) and the passive elements by means of one or more video cameras installed in the front of the train; to process the information of the optical signal captured by the camera or cameras by means of optical processors, based on artificial intelligence technologies and deep neural networks, to determine the speed and location of the train; and to send the processed location and speed information to at least one control center with representation on a screen of the processed location and speed information (i) of the train, as shown by the combination of Garcia and Park, because such a system provides autonomous driving operation of a railway vehicle.
Regarding claims 2 and 5, Garcia discloses that the optical signal from the camera or cameras is processed by a signal detector which detects passive signals or objects (5) of the track, a character reader which interprets the characters and signs of the signals of the track, and an optical flow meter (Garcia, page 944, III RELATED WORK, right column, lines 10-20, states: "Visual Odometry (VO). Odometry can be defined as the use of data from motion sensors in order to estimate changes in position over time [location/speed]. Visual odometry (VO) is a particular case of odometry, where the position information is acquired through camera images"; Garcia, Figs. 1-2, Table, paragraph II PROBLEM DEFINITION, page 944, left column, third paragraph, discloses that the sensors used in visual localization systems include monocular, stereo, and RGB-D cameras; and page 945, left column, paragraph IV, USE CASE: TRAIN STOP OPERATION: "The use case scenario is the Autonomous Urban Train where artificial intelligence and high-performance computational capabilities are used to increase the dependability and the safety of the system. The objective is to apply Computer Vision and Deep Learning techniques to improve different autonomous train operation functionalities as precision stop, rolling stock coupling operation or person and obstacle detection-identification in railroads". In the second paragraph, Garcia discloses (Fig. 1) detection-identification in railroads: "The selected use case is the automatic accurate stop at door equipped platforms aligning the vehicle and platform doors. The goal is to perform precise localization inside the platform area using visual patterns detection, identification and tracking in order to reach an accurate stopping point and managing automatic train operation (traction and brake commands, ATO functionality).
A contribution is expected to the automatic train operation system, adding the visual localization estimation information to the usual trains odometry (location/speed) data calculations based on radars and encoders." In the system of Garcia, the cameras and visual odometry detection correspond to the optical signal detector, and the visual odometry of Garcia corresponds to the optical flow meter because it provides location/speed through camera images/video; further, in the system of Garcia it would have been obvious to use the cameras to read characters such as text).
Furthermore, Park discloses cameras and a character reader in paragraphs 0099 and 00153-00154, a text and sign reader in paragraphs 00153-00154, and an optical flow meter in paragraph 0028.
Regarding claims 3 and 6, Garcia discloses the step of sending the location and speed information processed by the optical processors to communication systems of the train to be transmitted to another traffic control center (Garcia discloses determining the location/speed of the train on page 945, paragraph IV, via visual odometry data, i.e., the location/speed of the train; Garcia, paragraph II, Problem Definition, page 943, lines 1-10: "Communication-Based Train Control (CBTC) is a standard defined by the IEEE (IEEE 1474 [5]) which defines a set of performance and functional requirements for track and onboard equipment in order to enhance performance, availability, operations and the protection of the involved systems. A CBTC system could be defined as an automatic train control system where the track and onboard subsystems are continuously communicating"; page 944, right column, lines 1-10, Garcia discloses: "In GOA3 and GOA4 systems, as there is not a driver inside the train, an accurate train location system is required. Precise positioning systems can reach a higher grade of automation [7]. A train that implements GOA3 or GOA4 level can be considered as a robot that navigates through a track in indoor and outdoor environments including underground stations. Therefore, it becomes essential to implement precise and reliable train localization subsystems. A GOA3 or GOA4 train must compute, among others, the braking curve or the train stopping location with precision"; Garcia, Figs. 1-2, page 944, III RELATED WORK, right column, lines 10-20, states: "Visual Odometry (VO). Odometry can be defined as the use of data from motion sensors in order to estimate changes in position over time" [location/speed]; and page 945, left column, paragraph IV, USE CASE: TRAIN STOP OPERATION: "The use case scenario is the Autonomous Urban Train where artificial intelligence and high-performance computational capabilities are used to increase the dependability and the safety of the system.
The objective is to apply Computer Vision and Deep Learning techniques to improve different autonomous train operation functionalities as precision stop, rolling stock coupling operation or person and obstacle detection-identification in railroads". In the second paragraph, Garcia discloses (Fig. 1) detection-identification in railroads: "The selected use case is the automatic accurate stop at door equipped platforms aligning the vehicle and platform doors. The goal is to perform precise localization inside the platform area using visual patterns detection, identification and tracking in order to reach an accurate stopping point and managing automatic train operation (traction and brake commands, ATO functionality). A contribution is expected to the automatic train operation system, adding the visual localization estimation information to the usual trains odometry (location/speed) data." In the system of Garcia, it would have been obvious to communicate the location/speed of the train to a control center based on the IEEE standard, and it would have been obvious to further transmit the information from the first control center to other traffic control centers based on the IEEE communication standard for the sake of the safety of the autonomous train network).
Communication
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ISHRAT I SHERALI whose telephone number is (571) 272-7398. The examiner can normally be reached Monday-Friday, 8:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella can be reached at 571-272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
ISHRAT I. SHERALI
Examiner
Art Unit 2667
/ISHRAT I SHERALI/Primary Examiner, Art Unit 2667