DETAILED ACTION
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 05/23/2025 has been entered.
Specification
Applicant’s amendments to the specification and drawings filed on 05/23/2025 and 07/17/2023 have been considered. The amendments introduce no new matter. Support for the amendments to the drawings is found in paragraph [0164], and the amendments to the specification are minor amendments that address typographical errors. The amendments are entered.
Status of Claims
The following is a Non-Final Office Action in response to applicant’s request for continued examination (RCE) received on 05/23/2025.
Claims 1, 17, 33, and 37 are amended. Claims 10 and 26 are cancelled. Claims 41-43 are newly added. Claims 1-9, 11-25, and 27-43 are considered in this Office Action. Claims 1-9, 11-25, and 27-43 are currently pending.
Response to Arguments
Applicant’s amendments necessitated the new grounds of rejection set forth in this Office Action.
Applicant’s amendments and arguments with respect to 35 U.S.C. 101 have been considered; however, the arguments are not persuasive. An updated 35 U.S.C. 101 rejection will address applicant’s amendments.
Applicant’s amendments and arguments have been considered; however, applicant’s arguments are primarily raised in light of applicant’s amendments, and an updated 35 U.S.C. 103 rejection will address applicant’s amendments.
Examiner Notes
With respect to method claim 41, the claim recites the following limitation “a score attached to each selected rule, the analyzing step establishing a score of the processed data” which is not a positively recited method step. The phrase describes a result or condition without identifying an act performed. Because this step is merely descriptive, this limitation is not given patentable weight. See MPEP 2173.05(q).
With respect to method claim 42, the claim recites the following limitations: “the operator being at a set of controls of the type of the mobile asset normally operated on at least one of a specified railroad and a segment of the specified railroad and which the operator would be permitted by the specified railroad to operate in a normal course of events after certification” and “a score attached to each selected rule, the analyzing step establishing a score of the processed data”, which are not positively recited method steps. The phrases describe a result or condition without identifying an act performed. Because these steps are merely descriptive, the limitations are not given patentable weight. See MPEP 2173.05(q).
With respect to method claim 43, the claim recites the following limitation “the operator being at a set of controls of the type of the mobile asset normally operated on at least one of a specified railroad and a segment of the specified railroad and which the operator would be permitted by the specified railroad to operate in a normal course of events after certification;” which is not a positively recited method step. The phrase describes a result or condition without identifying an act performed. Because this step is merely descriptive, this limitation is not given patentable weight. See MPEP 2173.05(q).
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 41 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 41 recites “the PTC” and “the DP”. It is unclear what PTC and DP stand for, which renders the claim indefinite. The examiner notes that the first instance of an abbreviation should be accompanied by the full phrase to establish an acceptable meaning and antecedent basis.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-9, 11-25, and 27-43 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims are directed to an abstract idea without significantly more.
Claims 1-9, 11-25, and 27-43 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The judicial exception is not integrated into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The eligibility analysis in support of these findings is provided below, in accordance with the “Patent Subject Matter Eligibility Guidance” (as explained in MPEP 2106).
With respect to Step 1 of the eligibility inquiry (as explained in MPEP 2106), it is first noted that the methods (claims 1-9 and 11-16, claims 33-36, claim 41, claim 42, and claim 43) and the systems (claims 17-25 and 27-32, and claims 37-40) are directed to eligible categories of subject matter (i.e., a process and a machine, respectively). Thus, Step 1 is satisfied.
With respect to Step 2, and in particular Step 2A, it is next noted that the claims recite an abstract idea of assessing the performance skills of an operator of a mobile asset by reciting concepts performed in the human mind (including an observation, evaluation, judgment, opinion), which falls into the “mental process” group within the enumerated groupings of abstract ideas, wherein the courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. (See MPEP 2106.04(a)(2)). The claims further fall into “Certain methods of organizing human activity”, particularly managing personal behavior (including social activities, teaching, and following rules or instructions). The limitations reciting the abstract idea are highlighted in italics and the limitation directed to additional elements highlighted in bold, as set forth in exemplary claim 17, are: A system for automating the assessment of safety performance skills of a specified operator of a mobile asset, comprising: a web portal adapted to receive a request from a user, the request comprising identification of the specified operator of the mobile asset and a specified time range; a data acquisition and recording system onboard the mobile asset comprising at least one data recorder, the data acquisition and recording system adapted to receive a set of first data related to the mobile asset, and a set of second data related to the specified operator and the specified time range, the set of second data comprising a subset of the set of first data, the set of first data based on at least one data signal from at least one of: at least one data source onboard the mobile asset, the at least one data source comprising at least one of at least one camera and the at least one data recorder of the data acquisition and recording system; and at least one data source remote from the mobile asset; a video analytics system comprising an artificial 
intelligence component, the video analytics system adapted to process the set of first data and the set of second data into processed data, compare the processed data to a rule set directed to safe operation of mobile assets, and analyze the performance of the specified operator, the rule set part of a score system derived from the safe operation of mobile assets, implementation of the score system resulting in a score, the score equating to one of certifying the specified operator and decertifying the specified operator; and the web portal adapted to display at least one of the processed data and the score on at least one of the web portal and a display device, the displayed processed data comprising at least one video, and the displayed processed data adapted to be at least one of viewed by the user and compared to rules directed to the safe operation of mobile assets. Claims 1, 33, and 37 substantially recite the same limitation as claim 17 and therefore subject to the same rationale.
The limitations reciting the abstract idea are highlighted in italics and the limitation directed to additional elements highlighted in bold, as set forth in exemplary claim 42, are: A method for automating the assessment of safety performance skills of a specified operator of a mobile asset comprising the steps of: processing, using an artificial intelligence component of a video analytics system, at least a set of first data and a set of second data into processed data, the set of first data gathered from the mobile asset and the set of second data gathered from a source remote from the mobile asset; analyzing, using the video analytics system, the processed data in a score system of the video analytics system, the score system comprising a set of rules directed to safe operation of the mobile asset, each rule or a combination of the rules of the set of rules selected from the group consisting of the mobile asset operator performance of a class III brake test, the mobile asset operator efficiently starts movement of the mobile asset, the mobile asset operator strips the throttle, the mobile asset operator failed to wait 10 seconds before transition to dynamic brakes, the mobile asset operator independently set up and used the brake properly, the mobile asset operator performed proper running release, the mobile asset operator performed a proper combination braking procedure, the mobile asset operator made proper use of the horn, bell, headlights, and telemetry, the mobile asset operator did not fail to initialize the trip optimizer during the trip, the PTC was properly initialized and monitored by the mobile asset operator, and the DP was properly set up and brake tested by the mobile asset operator; a score attached to each selected rule, the analyzing step establishing a score of the processed data; displaying, using a display device of the web portal, at least one of the processed data and the score, the score corresponding to one of a certification and 
decertification of the mobile asset operator. Claims 41 and 43 substantially recite the same limitations as claim 42 and are therefore subject to the same rationale.
With respect to Step 2A Prong Two, the judicial exception is not integrated into a practical application. The additional elements are directed to: the system; the web portal; a data acquisition and recording system onboard the mobile asset; at least one signal from at least one of at least one data source onboard the mobile asset, comprising at least one of at least one camera and at least one data recorder of the data acquisition and recording system, and at least one data source remote from the mobile asset (means to collect data); an artificial intelligence component of a video analytics system (recited at a high level of generality); and the web portal adapted to display the processed data on at least one of the web portal and a display device, the displayed processed data comprising at least one video (amounts to displaying results), to implement the abstract idea. However, these elements fail to integrate the abstract idea into a practical application because they fail to provide an improvement to the functioning of a computer or to any other technology or technical field, fail to apply the exception with a particular machine, fail to effect a transformation of a particular article to a different state or thing, and fail to apply/use the abstract idea in a meaningful way beyond generally linking the use of the judicial exception to a particular technological environment.
Furthermore, these elements have been fully considered; however, they are directed to the use of generic computing elements (Applicant’s Specification [0184] describes a high-level general purpose computer) to perform the abstract idea, which is not sufficient to amount to a practical application and is tantamount to simply saying “apply it” using a general purpose computer. This merely serves to tie the abstract idea to a particular technological environment (a computer-based operating environment) by using the computer as a tool to perform the abstract idea, which is not sufficient to amount to a practical application. While the step of receiving, using a data recorder of a data acquisition and recording system onboard the mobile asset, a set of first data based on at least one data signal from at least one of: at least one data source onboard the mobile asset, the at least one data source onboard the mobile asset comprising at least one of at least one camera and at least one data recorder of the data acquisition and recording system; and at least one data source remote from the mobile asset is considered part of the abstract idea, if considered under Prong Two as an additional element, it would amount to pre-solution activity, because it is a generic step which amounts to a data gathering step, wherein “at least one data source onboard a mobile asset, the at least one data source onboard the mobile asset comprising at least one of at least one camera and at least one data recorder of the data acquisition and recording system” is recited at a high level of generality and amounts to a data gathering means. The examiner notes that the “video analytics system comprising an artificial intelligence component” recited in the claims has been considered. The claims do not impose any limits on how the video analytics system comprising an artificial intelligence component performs the processing of at least the set of first data and the set of second data into processed data.
The claims also do not impose any limits on how the analysis is accomplished, and thus it can be performed in any way known to those of ordinary skill in the art. The video analytics system comprising an artificial intelligence component is recited at a high level of generality, which is not sufficient to amount to a practical application and is tantamount to simply saying “apply it” using a general-purpose computer, which merely serves to tie the abstract idea to a particular technological environment.
Accordingly, because the Step 2A Prong One and Prong Two analysis resulted in the conclusion that the claims are directed to an abstract idea, additional analysis under Step 2B of the eligibility inquiry must be conducted in order to determine whether any claim element or combination of elements amount to significantly more than the judicial exception.
With respect to Step 2B of the eligibility inquiry, it has been determined that the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional limitations are directed to: the system; the web portal; a data acquisition and recording system onboard the mobile asset; at least one signal from at least one of at least one data source onboard the mobile asset, comprising at least one of at least one camera and at least one data recorder of the data acquisition and recording system, and at least one data source remote from the mobile asset (means to collect data); an artificial intelligence component of a video analytics system (recited at a high level of generality); and the web portal adapted to display the processed data on at least one of the web portal and a display device, the displayed processed data comprising at least one video (amounts to displaying results), to implement the abstract idea. These elements have been considered, but merely serve to tie the invention to a particular operating environment (i.e., a computer-based implementation), at a very high level of generality and without imposing meaningful limitation on the scope of the claim. In addition, Applicant’s Specification ([0184]) describes generic off-the-shelf computer-based elements for implementing the claimed invention, which does not amount to significantly more than the abstract idea and is not enough to transform an abstract idea into eligible subject matter. Such generic, high-level, and nominal involvement of a computer or computer-based elements in carrying out the invention merely serves to tie the abstract idea to a particular technological environment, which is not enough to render the claims patent-eligible, as noted at pg. 74624 of Federal Register/Vol. 79, No. 241, citing Alice, which in turn cites Mayo.
The examiner notes that the claims do not impose any limits on how the step of receiving, using a data recorder of a data acquisition and recording system onboard the mobile asset, a set of data based on at least one data signal from at least one of: at least one data source onboard the mobile asset, the at least one data source onboard the mobile asset comprising at least one camera and at least one data recorder of the data acquisition and recording system; and at least one data source remote from the mobile asset is accomplished. The claims also do not impose any limits on how the analysis is accomplished, and thus it can be performed in any way known to those of ordinary skill in the art. Additionally, with respect to the Berkheimer court case, evidence is provided below by the examiner that shows, based on the Step 2B analysis, how the claims are viewed as well-understood, routine, and conventional activity, for consistency with the Federal Circuit’s decision in Berkheimer and MPEP 2106.05(d). This is supported by the fact that the disclosure does not provide the details necessary to provide significantly more than the abstract idea performed on a general-purpose computer. Prior art references teach the limitation of receiving data, including that a device may or may not include other components 172 such as an image/video/sound capture device (e.g., a camera, voice recording microphone, stylus, etc.), which is a known technique. Thus, the use of a camera to gather data is recognized in the art and predates Applicant’s invention. As disclosed in Salameh et al. (US Pub. No. 2015/0149321 A1): “[0070] As stated above, FIG. 1B is an exemplary illustration of well-known, conventional computing machine as client computing machines or devices that may be used to implement and access one or more embodiments of the network-based social-marketplace platform 102 of the present invention.
As illustrated, the client computing device 108 (hereinafter simply referred to as client device 108) may be any well-known conventional computing machine, non-limiting examples of which may include netbooks, notebooks, laptops, smart tablets, mobile devices such as feature or smart mobile phones, or any other devices that are Network and or Internet enabled. The client device 108 includes the typical, conventional components such as an I/O module 160 (e.g., a keyboard or touch screen display, etc.), a storage module 162 for storing information (may use Cloud Computing Systems and services), a memory 164 used by a processor 166 to execute programs, a communication module 168 for implementing desired communication protocol, a communications interface (e.g., transceiver module) 170 for transmitting and receiving data, and may or may not include other components 172 such as an image/video/sound capture device such as a camera, voice recording microphone, stylus, etc.” Therefore, as shown by the cited prior art references, the Step 2B features of the invention are “routine and conventional.” It is when the claims are wholly directed to the abstract idea, without anything significantly more in the claims, that the claims are deemed to preempt or monopolize the exception (i.e., the abstract idea).
In addition, when taken as an ordered combination, the ordered combination adds nothing that is not already present as when the elements are taken individually. There is no indication that the combination of elements integrates the abstract idea into a practical application. Their collective functions merely provide conventional computer implementation. Therefore, when viewed as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a practical application of the abstract idea or that the ordered combination amounts to significantly more than the abstract idea itself.
The dependent claims have been fully considered as well. For example, claims 2/18 recite the at least one camera comprising at least one of at least one 360 degree camera located in at least one of in the mobile asset, on the mobile asset, and in the vicinity of the mobile asset, at least one fixed camera located in at least one of in the mobile asset, on the mobile asset, and in the vicinity of the mobile asset, and at least one microphone located in at least one of in the mobile asset, on the mobile asset, and in the vicinity of the mobile asset. These elements are directed to the use of generic computing elements (Applicant’s Specification [0184] describes a high-level general purpose computer) to perform the abstract idea, which is not sufficient to amount to a practical application and is tantamount to simply saying “apply it” using a general purpose computer, which merely serves to tie the abstract idea to a particular technological environment (a computer-based operating environment) by using the computer as a tool to perform the abstract idea. These elements have been considered, but merely serve to tie the invention to a particular operating environment (i.e., a computer-based implementation), at a very high level of generality and without imposing meaningful limitation on the scope of the claim. In addition, Applicant’s Specification ([0184]) describes generic off-the-shelf computer-based elements for implementing the claimed invention, which does not amount to significantly more than the abstract idea and is not enough to transform an abstract idea into eligible subject matter. Such generic, high-level, and nominal involvement of a computer or computer-based elements merely serves to tie the abstract idea to a particular technological environment, which is not enough to render the claims patent-eligible, as noted at pg. 74624 of Federal Register/Vol. 79, No. 241, citing Alice, which in turn cites Mayo. Similar to the findings for the independent claims above, these dependent claims are likewise directed to the abstract idea of a mental process, without integrating it into a practical application and with, at most, a general-purpose computer that serves to tie the idea to a particular technological environment, which does not add significantly more to the claims.
The ordered combination of elements in the dependent claims (including the limitations inherited from the parent claim(s)) add nothing that is not already present as when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. Accordingly, the subject matter encompassed by the dependent claims fails to amount to significantly more than the abstract idea.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 3, 4, 8, 12, 13, 15, 16, 17, 19, 20, 24, 28, 29, 31, and 32 are rejected under 35 U.S.C. 103 as being unpatentable over Kenji Fujii (US 2020/0364800 A1, hereinafter “Fujii”) in view of Aldo DiSorbo (US 2016/0117638 A1, hereinafter “DiSorbo”), further in view of Nicholas E. Roddy (US 2011/0208567 A9, hereinafter “Roddy”), and further in view of Ronald Ziegler (US 2010/0039247 A1, hereinafter “Ziegler”).
Claim 1/13
Fujii teaches:
A method for automating the assessment of safety performance skills of a specified operator of a mobile asset, comprising the steps of: receiving, using a data recorder of a data acquisition and recording system onboard the mobile asset, a set of first data based on at least one data signal from at least one of: at least one data source onboard the mobile asset, the at least one data source onboard the mobile asset comprising at least one camera and at least one data recorder of the data acquisition and recording system; and at least one data source remote from the mobile asset ([0048] The vehicle module 104 gathers driver data using the various sensors 114 provided in the vehicle (e.g., speed sensors, accelerometers, GPS locators, tire pressure sensors, self-driving sensors, and Audio/Visual sensors, such as backup cameras, and anti-theft devices) that are typically connected to the ECU via a Controller Area Network (CAN) bus for example. From the vehicle sensor data and/or video meta data gathered, the processor 110 computes the gathered data into scores using an artificial intelligence and/or machine learning module 124 based on insurance machine learning algorithms and an extensive data collection and analysis previously gathered to calculate a driving score that includes risk and safety for a particular trip. [0059] According to one example, the system may retain up to 60 seconds of video data at a time using the GPU. When the system detects a risky event, the GPU may save 10 seconds of video before and after the risky event on the on-board storage of the vehicle module. The system may update the remote data storage or server with trip data at regular intervals);
processing, using an artificial intelligence component of a video analytics system, at least the set of first data and the set of second data into processed data([0051] The machine learning module 124 is comprised of at least one insurance machine learning algorithm to analyze the data from the sensors 114, diagnostics module 116, engine control unit 118 and self-driving module 121 to generate driver scores and trip information. [0095] Based on all the data gathered from the trip, the system computes a trip-based driver scoring for a cumulative overall driver score 716. Next, the system computes the collected vehicle sensor and/or video meta data using artificial intelligence and/or machine learning based on data collection and analysis to develop driving scores including scores for risk and safety 718. If no more risky events are detected and it is determined that the trip has ended 720, the system stops collecting data);
analyzing, using at least one of the video analytics system and the user, the performance of the specified operator by comparing the processed data to a score system([0095] Based on all the data gathered from the trip, the system computes a trip-based driver scoring for a cumulative overall driver score 716. Next, the system computes the collected vehicle sensor and/or video meta data using artificial intelligence and/or machine learning based on data collection and analysis to develop driving scores including scores for risk and safety 718. If no more risky events are detected and it is determined that the trip has ended 720, the system stops collecting data), the score system derived from a rule set directed to safe operation of mobile assets ([0092] FIG. 7 is a flow diagram illustrating an exemplary method 700 for calculating a driver score of a driver of a vehicle. As defined above, the driver may be a person or the vehicle itself if the self-driving feature is engaged. [0093] For example, if the driver suddenly brakes, the vehicle module may check the corresponding sensor data over a 10 second time frame to determine the time of the greatest deceleration. When the vehicle module determines that a risky event has occurred, a severity level between 0 and 3, (or between 0 and 4) for example, is assigned and the trip summary is updated. The system may automatically assume a trip severity of 4 for every minute of the trip where the trip score is a number from 0-100, after taking into consideration the trip's configured typical severity (or predetermined severity or threshold) and the actual severity per minute. Table 1 further illustrates scores associated with safe operation of vehicle. Further, see [0095]-[0097]),
While Fujii teaches in [0095] Based on all the data gathered from the trip, the system computes a trip-based driver scoring for a cumulative overall driver score 716. Next, the system computes the collected vehicle sensor and/or video meta data using artificial intelligence and/or machine learning based on data collection and analysis to develop driving scores including scores for risk and safety 718. If no more risky events are detected and it is determined that the trip has ended 720, the system stops collecting data; [0092] FIG. 7 is a flow diagram illustrating an exemplary method 700 for calculating a driver score of a driver of a vehicle. As defined above, the driver may be a person or the vehicle itself if the self-driving feature is engaged. [0093]-[0097] For example, if the driver suddenly brakes, the vehicle module may check the corresponding sensor data over a 10 second time frame to determine the time of the greatest deceleration. When the vehicle module determines that a risky event has occurred, a severity level between 0 and 3 (or between 0 and 4), for example, is assigned and the trip summary is updated. The system may automatically assume a trip severity of 4 for every minute of the trip where the trip score is a number from 0-100, after taking into consideration the trip's configured typical severity (or predetermined severity or threshold) and the actual severity per minute. Table 1 further illustrates scores associated with safe operation of a vehicle. Fujii does not explicitly teach the following; however, an analogous reference in the field of performance evaluation, DiSorbo, teaches:
receiving, using a web portal remote from the mobile asset, a request from a user comprising identification of the specified operator and a specified time range (Fig. 24, an example of the report interface whereby an admin may select a driver and a date range to generate a report for the selected driver's activity during the selected date range);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teaching of Fujii to incorporate the teachings of DiSorbo to include receiving, using a web portal remote from the mobile asset, a request from a user comprising identification of the specified operator and a specified time range, as part of the scoring system of Fujii. Doing so would provide the use of a single portal for drivers and administrator(s) configured to dynamically manage moves while efficiently allocating resources ([0009]).
While Fujii teaches in [0095]: Based on all the data gathered from the trip, the system computes a trip-based driver scoring for a cumulative overall driver score 716. Next, the system computes the collected vehicle sensor and/or video meta data using artificial intelligence and/or machine learning based on data collection and analysis to develop driving scores including scores for risk and safety 718. If no more risky events are detected and it is determined that the trip has ended 720, the system stops collecting data. [0092] FIG. 7 is a flow diagram illustrating an exemplary method 700 for calculating a driver score of a driver of a vehicle. As defined above, the driver may be a person or the vehicle itself if the self-driving feature is engaged. [0093]-[0097] For example, if the driver suddenly brakes, the vehicle module may check the corresponding sensor data over a 10 second time frame to determine the time of the greatest deceleration. When the vehicle module determines that a risky event has occurred, a severity level between 0 and 3 (or between 0 and 4), for example, is assigned and the trip summary is updated. The system may automatically assume a trip severity of 4 for every minute of the trip where the trip score is a number from 0-100, after taking into consideration the trip's configured typical severity (or predetermined severity or threshold) and the actual severity per minute. Table 1 further illustrates scores associated with safe operation of the vehicle. Fujii does not explicitly teach the following; however, an analogous reference in the field of performance evaluation, Roddy, teaches:
receiving, using the data acquisition and recording system onboard the mobile asset ([0026] The mobile assets, e.g., 12 or 26, may be equipped with a plurality of sensors for monitoring a plurality of operating parameters representative of the condition of the remote asset and of the efficiency of its operation), a set of second data related to the specified operator and the specified time range, the set of second data comprising a subset of the set of first data, based on at least one data signal from at least one of: the at least one data source onboard the mobile asset; and the at least one data source remote from the mobile asset ([0026] Data regarding the location of the mobile asset and its operating parameters may be transferred periodically or in real time to a data base. [0030] In order to effectively utilize the vast amount of data that may be available regarding a fleet of mobile assets, the output of the analysis 48 of such data must be effectively displayed and conveyed to an interested user 14. For example, while the location of the mobile asset may be seen on map 190, by double clicking a cursor on the symbol for a single mobile asset, driver information, and other operating information for that mobile asset may be viewed on nested web pages. [0054] An operational parameter data storage unit may be optionally searched to collect, at 455, respective observations of operational parameter data occurring over a predetermined period of time prior to the repair. [0055] At 456, the number of times each distinct fault occurred during the predetermined period of time is determined);
and displaying, using a display device of the web portal, at least one of the processed data and the score, the processed data comprising at least one video, the displayed processed data adapted to be viewed by the user ([0030] In order to effectively utilize the vast amount of data that may be available regarding a fleet of mobile assets, the output of the analysis 48 of such data must be effectively displayed and conveyed to an interested user 14. [0032] It may be advantageous to include video information on such a web site, such as still or animated video produced by the operator of the locomotive and transmitted directly from the mobile asset to show the condition of a component. Such video information may be accompanied by live audio information, including speech from the operator, thereby allowing the user 14, the operator located on the mobile asset, and personnel at a service center 22 to conference regarding a developing anomaly).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Fujii and DiSorbo to incorporate the teachings of Roddy to include receiving, using the data acquisition and recording system onboard the mobile asset, a set of second data related to the specified operator and the specified time range, the set of second data comprising a subset of the set of first data, based on at least one data signal from at least one of: the at least one data source onboard the mobile asset; and the at least one data source remote from the mobile asset; and displaying, using a display device of the web portal, at least one of the processed data and the score, the processed data comprising at least one video, the displayed processed data adapted to be viewed by the user, as part of the system of Fujii. Doing so would improve the efficiency of operations of the assets to remain competitive in the marketplace ([0003]).
While Fujii teaches in [0095]: Based on all the data gathered from the trip, the system computes a trip-based driver scoring for a cumulative overall driver score 716. Next, the system computes the collected vehicle sensor and/or video meta data using artificial intelligence and/or machine learning based on data collection and analysis to develop driving scores including scores for risk and safety 718. If no more risky events are detected and it is determined that the trip has ended 720, the system stops collecting data. [0092] FIG. 7 is a flow diagram illustrating an exemplary method 700 for calculating a driver score of a driver of a vehicle. As defined above, the driver may be a person or the vehicle itself if the self-driving feature is engaged. [0093]-[0097] For example, if the driver suddenly brakes, the vehicle module may check the corresponding sensor data over a 10 second time frame to determine the time of the greatest deceleration. When the vehicle module determines that a risky event has occurred, a severity level between 0 and 3 (or between 0 and 4), for example, is assigned and the trip summary is updated. The system may automatically assume a trip severity of 4 for every minute of the trip where the trip score is a number from 0-100, after taking into consideration the trip's configured typical severity (or predetermined severity or threshold) and the actual severity per minute. Table 1 further illustrates scores associated with safe operation of the vehicle. Fujii does not explicitly teach the following; however, an analogous reference in the field of performance evaluation, Zielger, teaches:
the comparison resulting in a score for the operator, the score recommending one of certifying and decertifying the operator ([0169] "Performance tuning" may be utilized as a way to rank authorized and licensed/certified operators according to experience and skill, and to adjust the operating characteristics of the mobile asset 12 accordingly. For example, operator performance ratings such as P1, P2 and P3 can be used to differentiate authorized operators, where P3 may correspond to a beginner, P2 may correspond to an intermediate skilled operator and P1 may correspond to an advanced skilled operator. [0170] As an authorized operator's performance rating is improved, the mobile asset may unlock or otherwise enable advanced features, modify features and mobile asset capabilities and/or otherwise adjust one or more operating characteristics to match the capability of the operator).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Fujii, DiSorbo, and Roddy to incorporate the teachings of Zielger to include the comparison resulting in a score for the operator, the score recommending one of certifying and decertifying the operator, as part of the system of Fujii. Doing so would improve the efficiency and accuracy of business operations ([0002]).
Claim 3/19
Fujii further teaches:
The method of claim 1, further including at least one of: at least one video recorder located in at least one of in the mobile asset, on the mobile asset, and in the vicinity of the mobile asset; at least one sound recorder located in at least one of in the mobile asset, on the mobile asset, and in the vicinity of the mobile asset; at least one accelerometer on board the mobile asset; at least one of at least one gyro meter and at least one gyroscope onboard the mobile asset; and at least one magnetometer onboard the mobile asset ([0008] triggering recording of video data using a camera on-board the vehicle; [0061] The GPU inputs can be received from vehicle cameras, such as dashboard cameras and driving assistance cameras. [0048] The vehicle module 104 gathers driver data using the various sensors 114 provided in the vehicle (e.g., speed sensors, accelerometers, GPS locators, tire pressure sensors, self-driving sensors, and Audio/Visual sensors, such as backup cameras, and anti-theft devices) that are typically connected to the ECU via a Controller Area Network (CAN) bus, for example).
Claim 4/20
Fujii further teaches:
The method of claim 1, the set of first data further comprising at least one of event data recorder data, accelerometer data, gyrometer data, gyroscope data, fuel volume data, microphone data, inward facing 360 degrees camera data, outward facing 360 degrees camera data, inward facing fixed camera data, and outward facing fixed camera data ([0008] triggering recording of video data using a camera on-board the vehicle; [0061] The GPU inputs can be received from vehicle cameras, such as dashboard cameras and driving assistance cameras. [0048] The vehicle module 104 gathers driver data using the various sensors 114 provided in the vehicle (e.g., speed sensors, accelerometers, GPS locators, tire pressure sensors, self-driving sensors, and Audio/Visual sensors, such as backup cameras, and anti-theft devices) that are typically connected to the ECU via a Controller Area Network (CAN) bus, for example).
Claim 8/24
While Fujii teaches in [0095]: Based on all the data gathered from the trip, the system computes a trip-based driver scoring for a cumulative overall driver score 716. Next, the system computes the collected vehicle sensor and/or video meta data using artificial intelligence and/or machine learning based on data collection and analysis to develop driving scores including scores for risk and safety 718. If no more risky events are detected and it is determined that the trip has ended 720, the system stops collecting data. [0092] FIG. 7 is a flow diagram illustrating an exemplary method 700 for calculating a driver score of a driver of a vehicle. As defined above, the driver may be a person or the vehicle itself if the self-driving feature is engaged. [0093]-[0097] For example, if the driver suddenly brakes, the vehicle module may check the corresponding sensor data over a 10 second time frame to determine the time of the greatest deceleration. When the vehicle module determines that a risky event has occurred, a severity level between 0 and 3 (or between 0 and 4), for example, is assigned and the trip summary is updated. The system may automatically assume a trip severity of 4 for every minute of the trip where the trip score is a number from 0-100, after taking into consideration the trip's configured typical severity (or predetermined severity or threshold) and the actual severity per minute. Table 1 further illustrates scores associated with safe operation of the vehicle. Fujii does not explicitly teach the following; however, an analogous reference in the field of performance evaluation, Roddy, teaches:
The method of claim 1, wherein the data acquisition and recording system receives the set of first data and the set of second data via at least one of a wireless data link and a wired data link ([0058] An apparatus configured to accomplish communication actions is generally identified by numeral 110 of FIG. 5, and it comprises one or more communication elements 112 and a monitoring station 114. The communication element(s) 112 are carried by the remote vehicle, for example locomotive 12 or truck. The communication element(s) may comprise a cellular modem, a satellite transmitter or similar well-known means or methods for conveying wireless signals over long distances. Signals transmitted by communication element 112 are received by monitoring station 114 that, for example, may be the maintenance facility 22 or data center 18 of FIG. 1. Monitoring station 114 includes appropriate hardware and software for receiving and processing vehicle system parameter data signals generated by locomotive 12 or truck 26 from a remote location).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teaching of Fujii to incorporate the teachings of Roddy to include the data acquisition and recording system receiving the set of first data and the set of second data via at least one of a wireless data link and a wired data link, as part of the system of Fujii. Doing so would improve the efficiency of operations of the assets to remain competitive in the marketplace ([0003]).
Claim 12/28
Fujii further teaches:
The method of claim 1, the set of first data further comprising at least one of fuel data, weather data, train consist data, crew data, time data, and movement authority data for a specified course of movement of the mobile asset ([0048] The vehicle module 104 gathers driver data using the various sensors 114 provided in the vehicle (e.g., speed sensors, accelerometers, GPS locators, tire pressure sensors, self-driving sensors, and Audio/Visual sensors, such as backup cameras, and anti-theft devices) that are typically connected to the ECU via a Controller Area Network (CAN) bus, for example. [0053] The ECU 118 can monitor and set the air-fuel mixture, the ignition timing, and the idle speed, for example. [0094] Upon detection of an event, the system may use the 60 seconds of video data to save 10 seconds of video before and 10 seconds after the event timestamp).
Claim 13/29
While Fujii teaches in [0095]: Based on all the data gathered from the trip, the system computes a trip-based driver scoring for a cumulative overall driver score 716. Next, the system computes the collected vehicle sensor and/or video meta data using artificial intelligence and/or machine learning based on data collection and analysis to develop driving scores including scores for risk and safety 718. If no more risky events are detected and it is determined that the trip has ended 720, the system stops collecting data. [0092] FIG. 7 is a flow diagram illustrating an exemplary method 700 for calculating a driver score of a driver of a vehicle. As defined above, the driver may be a person or the vehicle itself if the self-driving feature is engaged. [0093]-[0097] For example, if the driver suddenly brakes, the vehicle module may check the corresponding sensor data over a 10 second time frame to determine the time of the greatest deceleration. When the vehicle module determines that a risky event has occurred, a severity level between 0 and 3 (or between 0 and 4), for example, is assigned and the trip summary is updated. The system may automatically assume a trip severity of 4 for every minute of the trip where the trip score is a number from 0-100, after taking into consideration the trip's configured typical severity (or predetermined severity or threshold) and the actual severity per minute. Table 1 further illustrates scores associated with safe operation of the vehicle. Fujii does not explicitly teach the following; however, an analogous reference in the field of performance evaluation, Roddy, teaches:
The method of claim 1, wherein displaying the processed data includes displaying at least one of critical geographic zones of operation, c