Prosecution Insights
Last updated: April 19, 2026
Application No. 18/341,583

ACTION SEGMENT ESTIMATION MODEL BUILDING DEVICE, METHOD, AND NON-TRANSITORY RECORDING MEDIUM

Final Rejection: §103, §112
Filed: Jun 26, 2023
Examiner: BONANSINGA, AARON TIMOTHY
Art Unit: 2673
Tech Center: 2600 (Communications)
Assignee: Fujitsu Limited
OA Round: 2 (Final)

Grant Probability: 76% (Favorable)
OA Rounds: 3-4
To Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76%, above average (19 granted / 25 resolved; +14.0% vs TC avg)
Interview Lift: +33.3% among resolved cases with interview (strong)
Typical Timeline: 2y 11m average prosecution
Career History: 54 total applications across all art units, 29 currently pending
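The headline figures above are simple ratios of the raw counts. As a quick sanity check (a sketch, assuming the dashboard's "+14.0% vs TC avg" is an absolute percentage-point delta):

```python
# Sanity-check the dashboard's headline figures from the raw counts.
granted, resolved = 19, 25

allow_rate = granted / resolved  # career allow rate
tc_avg = allow_rate - 0.14       # "+14.0% vs TC avg" read as an absolute delta

print(f"allow rate: {allow_rate:.0%}")        # 76%
print(f"implied TC average: {tc_avg:.0%}")
```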

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 69.6% (+29.6% vs TC avg)
§102: 10.3% (-29.7% vs TC avg)
§112: 9.2% (-30.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 25 resolved cases

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Specification Objections: The objection to the specification for minor informalities has been withdrawn.

Claim Rejections - 35 USC § 112: The rejections under 35 USC § 112(b) for claims 2-3, 7-8, and 12-13 are maintained, while the rejections for claims 5, 10, and 15 have been withdrawn in light of the amendments filed on 12/16/2025.

Claim Rejections - 35 USC § 103: Applicant's arguments (see remarks), filed 12/16/2025, with respect to claims 1-3, 5-8, 10-13, and 14-15 have been fully considered but are unpersuasive.

On page 12, applicant argues: "It is respectfully submitted that Xiaochun and Nakamura, taken individually or in combination (the propriety of any such combination not being admitted), fail to disclose or suggest at least the above combination of features of amended claim 1." In response, the Office respectfully disagrees. Based on the breadth of the claim language, XIAOCHUN explicitly teaches the claimed action segment estimation model building device, including learning observation probabilities for each movement type of the first hidden Markov models by unsupervised learning in a hidden semi-Markov model, fixing the learnt observation probabilities, generating second supervised data by augmenting input first supervised data, learning transition probabilities of the movements, and building the hidden semi-Markov model for estimating segments of the actions; and NAKAMURA explicitly teaches learning the transition probabilities by supervised learning in which the second supervised data is used, and performing the oversampling in the feature space by adding noise related to a speed of each body location of a person performing a movement to a feature value of the movement for each body location, all as set forth with pinpoint citations in the rejection of claim 1 below.
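The disputed feature-space oversampling limitation — adding noise tied to the speed of each body location — is easier to see in code. The following is a hypothetical sketch of that claim language for illustration only, not an implementation from either reference; the function name, the 2-D joint features, and the linear speed-to-noise scaling are all assumptions:

```python
import random

random.seed(0)

def oversample_features(frames, noise_scale=0.05):
    """Feature-space oversampling sketch: copy a sequence of per-frame
    body-location features, adding Gaussian noise whose scale is tied to
    each location's frame-to-frame speed (fast joints get more noise).

    frames: list of dicts mapping body location -> (x, y) feature."""
    out = [dict(frames[0])]  # first frame has no speed estimate; keep as-is
    for prev, cur in zip(frames, frames[1:]):
        noisy = {}
        for loc, (x, y) in cur.items():
            px, py = prev[loc]
            speed = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
            sigma = noise_scale * speed  # noise proportional to speed
            noisy[loc] = (x + random.gauss(0, sigma), y + random.gauss(0, sigma))
        out.append(noisy)
    return out

# A fast-moving wrist gets visibly more jitter than a nearly static hip.
frames = [{"wrist": (0.0, 0.0), "hip": (0.0, 0.0)},
          {"wrist": (1.0, 1.0), "hip": (0.01, 0.0)}]
augmented = oversample_features(frames)
```

Under this reading, each augmented sequence would receive the teacher information (labels) of the original sequence, which is the "teacher information" step recited in claim 1.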
On page 14, applicant argues: "Xiaochun and Nakamura, taken alone or in combination (the propriety of any such combination not being admitted), fail to disclose or suggest at least the above-discussed features of the current claims." In response, the Office respectfully disagrees for the reasons stated above and below.

On page 14, applicant further argues: "The Applicant therefore respectfully submits that independent claim 1, as amended, is allowable over the cited art for at least the above reasons. Moreover, although independent claims ...". In response, the Office respectfully disagrees for the reasons stated above and below.

On page 15, applicant argues: "Finally, the Applicant notes that dependent claims 2, 3, 5, 7, 8, 10, 12, 13, and 15 each depend from allowable independent claims 1, 6 or 11. As such, each of these claims are also patentable over the cited art at least by virtue of dependency from an allowable claim, as well as for the additional subject matter recited therein." In response, the Office respectfully disagrees for the reasons stated above and below.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2-3, 7-8, and 12-13, along with their dependent claims, are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 2 recites, in the first limitation, "the oversampling in the time direction is performed by propagating, for each clock-time, an original parameter randomly set at the clock-time to times before and after clock-times while attenuating the original parameter", and, in the second limitation, "at each clock-time, a feature value of a movement corresponding to a clock-time of a maximum parameter among the original parameter and parameters propagated from the before and after clock-times is selected as a feature value for each of the clock-times". The Office understands that for each clock-time an original parameter is propagated, and at each clock-time the original parameter is randomly set based on a group of values before and after clock-times. However, it is unclear what value the original parameter is specifically being set to, what the applicant means by "before and after clock-times", and which clock-times are being referenced, because an original parameter is not only propagated for each clock-time but the parameter is randomly set "at the clock times to times before and after clock-times" while also at the same time being attenuated. For purposes of examination, the examiner is interpreting the limitation as stated in paragraph [0051]: "At step 252, the CPU 51 propagates a value of the stretch strength generated for each clock-time to times before and after this clock-time while attenuating the value". The Office respectfully requests the Applicant to amend claim 2 in order to clarify the claimed invention.

Claim 3 recites "wherein the original parameter is attenuated so as to become zero after a predetermined number of clock-times distant" in the first limitation. The claim is unclear due to its dependence on claim 2, and because it is not clear what the applicant means by "clock-times distant". For purposes of examination, the examiner is interpreting the limitation as stated in paragraph [0022]: "…attenuation is performed so as to become zero at a clock-time three clock-times distant." The Office respectfully requests the Applicant to amend claim 3 in order to clarify the claimed invention.

Claim 7 recites the same two limitations as claim 2 and is unclear for the same reasons. For purposes of examination, the examiner is interpreting the limitation as stated in paragraph [0051], and the Office respectfully requests the Applicant to amend claim 7 in order to clarify the claimed invention.

Claim 8 recites "wherein the original parameter is attenuated so as to become zero after a predetermined number of clock-times distant" in the first limitation. The Office finds the term "predetermined number of clock-times distant" to render the claim indefinite. The claim is unclear because it is not clear what the applicant means by "clock-times distant" and due to its dependence on claim 7. For purposes of examination, the examiner is interpreting the limitation as stated in paragraph [0022], and the Office respectfully requests the Applicant to amend claim 8 in order to clarify the claimed invention.

Claim 12 recites, in the first limitation, "propagating an original parameter randomly set, at each clock-time, to before and after clock-times", and, in the second limitation, "at each clock-time, a feature value of a movement corresponding to a clock-time of a maximum parameter among the original parameter and parameters propagated from the before and after clock-times is selected as a feature value for each of the clock-times". The claim is unclear for the same reasons as claim 2. For purposes of examination, the examiner is interpreting the limitation as stated in paragraph [0051], and the Office respectfully requests the Applicant to amend claim 12 in order to clarify the claimed invention.

Claim 13 recites "wherein the original parameter is attenuated so as to become zero after a predetermined number of clock-times distant" in the first limitation, and is unclear for the same reasons as claims 3 and 8, as well as due to its dependence on claim 12. For purposes of examination, the examiner is interpreting the limitation as stated in paragraph [0022], and the Office respectfully requests the Applicant to amend claim 13 in order to clarify the claimed invention.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 6-8, and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over XIAOCHUN et al.
(Xiaochun, Luo et al., "Capturing and Understanding Workers' Activities in Far-Field Surveillance Videos with Deep Action Recognition and Bayesian Nonparametric Learning", Computer-Aided Civil and Infrastructure Engineering, October 2018, pages 1-19, DOI:10.1111/mice.12419), hereinafter referenced as XIAOCHUN, in view of NAKAMURA et al. (Nakamura, Tomoaki et al., "Segmenting Continuous Motions with Hidden Semi-Markov Models and Gaussian Processes", Front. Neurorobot., 20 December 2017, Volume 11, https://doi.org/10.3389/fnbot.2017.00067), hereinafter referenced as NAKAMURA.

Regarding claim 1, XIAOCHUN explicitly teaches an action segment estimation model building device (Fig. 1, Abstract: XIAOCHUN discloses that a Bayesian nonparametric hidden semi-Markov model was innovatively used to model and infer workers' activities based on action sequences; see also pg. 7, col. 1, para. 4) comprising: a memory (pg. 7, col. 2, para. 1: XIAOCHUN discloses that the computation was conducted with a PC equipped with a memory of 32 GB); and a processor connected to the memory (pg. 7, col. 1, para. 4: XIAOCHUN discloses that the computation was conducted with a PC equipped with an NVidia GeForce GTX 1080Ti GPU and an Intel Xeon CPU E5-2630), the processor being configured to:

in a hidden semi-Markov model (Fig. 1; pg. 2, col. 2, para. 2: XIAOCHUN discloses that this study set out to develop a hierarchical statistical method for recognizing workers' activities in far-field surveillance videos. First, the temporal segment networks (TSNs) (Wang et al., 2015) were used to recognize workers' actions, and a new fusion strategy was proposed to consider the characteristics of far-field surveillance videos. Second, the hierarchical Dirichlet process-hidden semi-Markov model (HDP-HSMM) (Johnson and Willsky, 2012) was employed to model and infer workers' latent states), including a plurality of second hidden Markov models each containing a plurality of first hidden Markov models using types of movement of a person as states, and the plurality of second hidden Markov models each using actions defined by combining a plurality of the movements as states, learn observation probabilities for each of the movement types of the plurality of first hidden Markov models using unsupervised learning (Fig. 1; pg. 6, col. 1, para. 4: XIAOCHUN discloses that Figure 1 shows the probabilistic graphical model of the HDP-HSMM (Johnson and Willsky, 2013), which takes action sequences as input, models and clusters typical workers' activities, and segments action sequences temporally. In Bayesian inference, workers' activities are referred to as latent states, while basic actions are referred to as observations. The HDP-HSMM adopted in this study is with explicit duration semi-Markovianity, which means each state's duration is given an explicit distribution. Further, at pg. 7, col. 1, para. 1, XIAOCHUN discloses that the observation sequence (ys) of state zs can be drawn from the observation distribution given parameters θzs; see also pg. 7, col. 2, para. 3);

XIAOCHUN further explicitly teaches fix the learnt observation probabilities, generate second supervised data by augmenting input first supervised data (Fig. 1; pg. 5, col. 1, para. 3: XIAOCHUN discloses that there were two steps for preparing the observational data: extracting spatial and temporal streams and recognizing basic actions with the TSNs. At pg. 5, col. 1, para. 3, XIAOCHUN discloses that to start the tracking process, each worker of interest was manually selected by putting a bounding box, which is the minimum rectangle enclosing the worker (wherein action clips were created from the frames of the tracking process and the temporal/spatial CNN (i.e., "TSN") in Wang et al. (2016) was used for both training and testing to create and output snippets with preliminary predictions for action classes in the spatial and temporal directions). At pg. 7, col. 2, para. 3, XIAOCHUN discloses that a total of 540 clips were manually selected to construct the training and test data sets for the TSNs, and these action clips were manually categorized into seven action classes. Further, at pg. 9, col. 2, para. 2, XIAOCHUN discloses that the testing process used the same settings regarding augmentation as Wang et al. (2016)), and learn transition probabilities of the movements of the first hidden Markov models in which the second supervised data is used (Fig. 1; pg. 6, col. 1, para. 4: XIAOCHUN discloses that the model employs an HDP (Teh et al., 2006) to define a global random probability measure. At pg. 6, col. 1, para. 4, XIAOCHUN discloses that the random measures are independently distributed according to the Dirichlet process and linked by being drawn from the same discrete measure β, thus E[πi] = β; πi can be interpreted as probability measures on the positive integers, which are the identifiers of the observations in the ith activity. Further, at pg. 6, col. 2, para. 1, XIAOCHUN discloses that each πi can be interpreted as the transition distribution from state i, namely the ith row of the transition matrix of the HSMM (wherein each state zs can be drawn from the transition distribution πzs−1, where zs−1 indexes the previous state, and the observation sequence (ys) of state zs can be drawn from the observation distribution given parameters θz). See equations (5-10) and pg. 10, col. 1, paras. 6-7);

and build the hidden semi-Markov model that is a model for estimating segments of the actions by using the learnt observation probabilities and the learnt transition probabilities (Fig. 1; pg. 9, col. 2, para. 3: XIAOCHUN discloses that the architectures of the spatial and temporal CNNs for testing proposed in Wang et al. (2016) were used, and the testing process used the same settings regarding augmentation as Wang et al. (2016). To implement augmentation, all RGB frames and optical flow images were reshaped to 340 × 256 pixels before feeding to the CNNs, and a sliding window of 224 × 224 pixels was used on the reshaped images, generating ten augmented samples (wherein action clips were created from the frames of the tracking process and the temporal/spatial CNN (i.e., "TSN") in Wang et al. (2016) was used for both training and testing to create and output snippets with preliminary predictions for action classes in the spatial and temporal directions); see also pg. 9, col. 2, para. 2), wherein the first supervised data is augmented by adding teacher information of the first supervised data to each item of data generated by at least one of oversampling in a time direction or oversampling in a feature space, and wherein the oversampling in the feature space is performed (Fig. 1; pg. 5, col. 2, para. 3: XIAOCHUN discloses that in the Temporal Segment Networks, each snippet in the sequence will produce its preliminary prediction of the action classes, and then a consensus among the snippets will be derived as the clip-level prediction. The spatial stream and the temporal stream of an action t are divided into K segments uniformly along the temporal dimension. A spatial snippet (i.e., an RGB image) will be sampled randomly from each segment of the spatial stream. The spatial CNN takes the K images as input and produces S = S1, S2, ..., SK as output, where Si = (s1, s2, ..., sN) is an action classification score vector of the ith snippet. A temporal snippet, which is a stack of x-direction and y-direction optical flow field images, is sampled from each segment of the temporal stream. The temporal CNN takes the K image stacks as input and produces T = T1, T2, ..., TK, where Tj = (t1, t2, ..., tN) is an action classification score vector).

XIAOCHUN fails to explicitly teach: learn transition probabilities of the movements of the first hidden Markov models by supervised learning in which the second supervised data is used; and wherein the oversampling in the feature space is performed by adding noise related to a speed of each body location of a person performing a movement in the first supervised data to a feature value of the movement for each body location.

However, NAKAMURA explicitly teaches learn transition probabilities of the movements of the first hidden Markov models (Fig. 3; pg. 3, col. 2, para. 2: NAKAMURA discloses GP-HSMM (Gaussian process-hidden semi-Markov model), a novel method to divide time series motion data into unit actions by using a stochastic model to estimate their lengths and classes. The proposed method involves a hidden semi-Markov model (HSMM) with a Gaussian process (GP) emission distribution, where each state represents a unit action. At pg. 4, col. 1, para. 2, NAKAMURA discloses utilizing Gaussian process regression, which learns emission xi of time step i in a segment, making it possible to represent each unit action as part of a continuous trajectory; if pairs (i, Xc) of emissions xi of time step i of segments belonging to the same class c are obtained, a predictive distribution whereby the emission of time step i becomes x follows a Gaussian distribution. At pg. 4, col. 2, para. 2, NAKAMURA discloses using the blocked Gibbs sampler, which samples segments and their classes in an observed sequence; in the initialization phase, all observed sequences are first randomly divided into segments. Segments xnj (j = 1, 2, ..., Jn) in observed sequence sn are then removed from the learning data, and parameter Xc of the Gaussian process and transition probability P(c|c′) of the HSMM are updated) by supervised learning in which the second supervised data is used (Fig. 12; pg. 8, col. 2, para. 2: NAKAMURA discloses applying the proposed method to more complex motion capture data, which consisted of the basic motions of karate (called kata in Japanese) as shown in Figure 10 from a motion capture library; there are fixed motion patterns (punches or guards) in kata, and it is easy to form a ground truth for the segmentation. At pg. 7, col. 2, para. 1, NAKAMURA discloses computing the normalized Hamming distance between the unsupervised segmentation and the ground truth (wherein c and c̄ represent sequences of estimated motion classes and true motion classes)); and wherein adding noise related to a speed of each body location of a person performing a movement in the first supervised data to a feature value of the movement for each body location (Fig. 3; pg. 4, col. 1, para. 2: NAKAMURA discloses Gaussian process regression, which learns emission xi of time step i in a segment, making it possible to represent each unit action as part of a continuous trajectory; if pairs (i, Xc) of emissions xi of time step i of segments belonging to the same class c are obtained, a predictive distribution whereby the emission of time step i becomes x follows a Gaussian distribution, with a hyperparameter that represents noise in the observation. In Equation (3), k is a vector containing the elements k(ip, i), and c is a scalar value k(i, i). Using the kernel function, the GP can learn a time-series sequence that contains complex changes; see also pg. 4, col. 2, paras. 1-2).
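The HDP-HSMM mapping above rests on three quantities: per-state transition distributions πi, observation distributions over actions, and an explicit duration distribution for each state. The following toy generative sketch of an explicit-duration HSMM is for illustration only; the hand-picked parameters stand in for the learned model described by XIAOCHUN and are not taken from either reference:

```python
import random

random.seed(0)

# Toy explicit-duration HSMM: 2 latent activity states, 3 observable actions.
trans = {0: [0.0, 1.0], 1: [1.0, 0.0]}          # pi_i: next-activity distribution
obs = {0: [0.7, 0.2, 0.1], 1: [0.1, 0.2, 0.7]}  # per-state action distribution
dur = {0: [3, 4, 5], 1: [2, 3]}                 # explicit duration support per state

def sample_hsmm(n_segments, z0=0):
    """Generate segments (state, duration, observed actions) from the toy model."""
    z, segments = z0, []
    for _ in range(n_segments):
        d = random.choice(dur[z])                            # explicit duration draw
        ys = random.choices([0, 1, 2], weights=obs[z], k=d)  # emit d actions
        segments.append((z, d, ys))
        z = random.choices([0, 1], weights=trans[z])[0]      # state transition
    return segments

for z, d, ys in sample_hsmm(3):
    print(f"state {z} held for {d} steps, actions {ys}")
```

Inference in the cited references runs in the opposite direction: given the action sequence, recover the segment boundaries, durations, and latent states.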
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of XIAOCHUN of an action segment estimation model building device with the teachings of NAKAMURA of learning transition probabilities of the movements of the first hidden Markov models by supervised learning in which the second supervised data is used, and of performing the oversampling in the feature space by adding noise related to a speed of each body location of a person performing a movement in the first supervised data to a feature value of the movement for each body location. The combined system of XIAOCHUN would thereby fix the learnt observation probabilities, generate second supervised data by augmenting input first supervised data, and learn transition probabilities of the movements of the first hidden Markov models by supervised learning in which the second supervised data is used; and build the hidden semi-Markov model that is a model for estimating segments of the actions by using the learnt observation probabilities and the learnt transition probabilities, wherein the first supervised data is augmented by adding teacher information of the first supervised data to each item of data generated by at least one of oversampling in a time direction or oversampling in a feature space, and wherein the oversampling in the feature space is performed by adding noise related to a speed of each body location of a person performing a movement in the first supervised data to a feature value of the movement for each body location. The motivation behind the modification would have been to obtain a system that improves machine learning model training, accuracy, and classification, as well as resolution, since both XIAOCHUN and NAKAMURA concern image analysis and Markov models. 
XIAOCHUN’s systems and methods provide a hierarchical hidden Markov model for capturing and understanding workers’ high-level activities in far-field surveillance videos that can be implemented for objective work sampling, personal physical fatigue, trade-level health risk assessment, and process-based quality control, while NAKAMURA’s systems and methods provide a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments and makes it possible to efficiently search for all possible segment lengths and classes in an unsupervised manner. Please see XIAOCHUN et al. (Xiaochun, Luo et al., “Capturing and Understanding Workers’ Activities in Far-Field Surveillance Videos with Deep Action Recognition and Bayesian Nonparametric Learning”, Computer-Aided Civil and Infrastructure Engineering, October 2018, pages 1-19, DOI:10.1111/mice.12419), Abstract and Pg. [16], Col. [01], Para. [01-02], and NAKAMURA et al. (NAKAMURA, Tomoaki et al., “Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes”, Front. Neurorobot, 20 December 2017, Volume 11 – 2017, https://doi.org/10.3389/fnbot.2017.00067), Abstract and Pg. [10], Col. [02], Para. [01-02]. Regarding claim 2, XIAOCHUN in view of NAKAMURA explicitly teaches the action segment estimation model building device of claim 1. XIAOCHUN further teaches wherein: at each clock-time, a feature value of a movement corresponding to a clock-time of a maximum parameter among the original parameter and parameters propagated from the before and after clock-times is selected as a feature value for each of the clock-times (Fig. 1. Pg. [06], Col. [01], Para. [02]-XIAOCHUN discloses Wang et al. (2016) proposed two strategies to aggregate the scores at the segment level to produce the score vector out of each stream: maximum and mean. 
The maximum strategy uses the score vector of the segment with the highest score element, whereas the mean strategy uses the mean score vector of all segments. Wang et al. (2016) proposed the weighted average strategy at the stream level to produce the final action score vector, which is the weighted average of the score vectors of the two streams. Please also read Pg. [05], Col. [02], Para. [03]). Although XIAOCHUN explicitly teaches the oversampling in the time direction is performed by propagating, for each clock-time, an original parameter randomly set at the clock-time to times before and after clock-times (Fig. 1. Pg. [05], Col. [01], Para. [03]-XIAOCHUN discloses there were two steps for preparing the observational data: extracting spatial and temporal streams and recognizing basic actions with the Temporal Segment Networks (wherein each worker of interest is tracked by a sequence of bounding boxes that are each represented by a frame number and pixel coordinates, action clips with a duration of 3 seconds are created from the frames, and spatial and temporal streams are created based on the action clips). At Pg. [05], Col. [02], Para. [03]-XIAOCHUN discloses in the Temporal Segment Networks, each snippet in the sequence will produce its preliminary prediction of the action classes, and then a consensus among the snippets will be derived as the clip-level prediction. The spatial stream and the temporal stream of an action t are divided into K segments uniformly along the temporal dimension. A temporal snippet, which is a stack of x-direction and y-direction optical flow field images, is sampled from each segment of the temporal stream. The temporal CNN takes the K image stacks as input and produces T=T1,T2,...,TK, where Tj=(t1,t2,...,tN) is an action classification). Please also read Pg. [06], Col. [01], Para. [03]. XIAOCHUN fails to explicitly teach while attenuating the original parameter. 
However, NAKAMURA explicitly teaches while attenuating the original parameter (Fig. 5. Pg. [05], Col. [01], Para. [01]-NAKAMURA discloses after sampling xnj and cnj, parameter Xc of the Gaussian process and transition probability P(c|c′) of the hidden semi-Markov model are updated by adding them to the learning data. The segments and parameters of Gaussian processes are optimized alternately by iteratively performing Algorithm 1. Algorithm 1 shows the pseudocode of the blocked Gibbs sampler. At Pg. [06], Col. [01], Para. [01]-NAKAMURA discloses segment xj and its class are determined by backward sampling length k and class c of the segment, based on forward probabilities. At Pg. [06], Col. [02], Para. [01]-NAKAMURA discloses by iterating this procedure until t = 0, the observed sequence can be divided into segments and their classes can be determined. Please also read Pg. [06], Col. [01], Para. [01]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of XIAOCHUN in view of NAKAMURA of an action segment estimation model building device with the teachings of NAKAMURA of attenuating the original parameter. The combined system of XIAOCHUN would thereby perform the oversampling in the time direction by propagating, for each clock-time, an original parameter randomly set at the clock-time to times before and after clock-times while attenuating the original parameter. The motivation behind the modification would have been to obtain a system that improves machine learning model training, accuracy, and classification, as well as resolution, since both XIAOCHUN and NAKAMURA concern image analysis and Markov models. 
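The segment-level (maximum/mean) and stream-level (weighted average) score-aggregation strategies quoted above from Wang et al. (2016) can be sketched as follows. This is an illustrative reading of the quoted passage only, not code from either reference; the array layout and the example weights are assumptions.

```python
import numpy as np

def aggregate_stream(segment_scores, strategy="mean"):
    """Aggregate per-segment class-score vectors into one stream-level vector.

    segment_scores: (K, N) array, K segments x N action classes.
    'max'  keeps the score vector of the segment containing the single
    highest score element; 'mean' takes the elementwise mean over segments.
    """
    if strategy == "max":
        # Row index of the segment holding the overall maximum element.
        best = np.unravel_index(np.argmax(segment_scores), segment_scores.shape)[0]
        return segment_scores[best]
    return segment_scores.mean(axis=0)

def fuse_streams(spatial, temporal, w_spatial=1.0, w_temporal=1.5):
    """Stream-level weighted average of the two streams' score vectors."""
    return (w_spatial * spatial + w_temporal * temporal) / (w_spatial + w_temporal)
```

The consensus described in the quoted passage would then be the class with the largest element of the fused vector.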
XIAOCHUN’s systems and methods provide a hierarchical hidden Markov model for capturing and understanding workers’ high-level activities in far-field surveillance videos that can be implemented for objective work sampling, personal physical fatigue, trade-level health risk assessment, and process-based quality control, while NAKAMURA’s systems and methods provide a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments and makes it possible to efficiently search for all possible segment lengths and classes in an unsupervised manner. Please see XIAOCHUN et al. (Xiaochun, Luo et al., “Capturing and Understanding Workers’ Activities in Far-Field Surveillance Videos with Deep Action Recognition and Bayesian Nonparametric Learning”, Computer-Aided Civil and Infrastructure Engineering, October 2018, pages 1-19, DOI:10.1111/mice.12419), Abstract and Pg. [16], Col. [01], Para. [01-02], and NAKAMURA et al. (NAKAMURA, Tomoaki et al., “Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes”, Front. Neurorobot, 20 December 2017, Volume 11 – 2017, https://doi.org/10.3389/fnbot.2017.00067), Abstract and Pg. [10], Col. [02], Para. [01-02]. Regarding claim 3, XIAOCHUN in view of NAKAMURA explicitly teaches the action segment estimation model building device of claim 2. XIAOCHUN fails to explicitly teach wherein the original parameter is attenuated so as to become zero after a predetermined number of clock-times distant. However, NAKAMURA explicitly teaches wherein the original parameter is attenuated so as to become zero after a predetermined number of clock-times distant (Fig. 6. Pg. [06], Col. [01], Para. [01]-NAKAMURA discloses we regard segments and their classes as latent variables that are sampled by forward filtering-backward sampling (Algorithm 2). Figure 6 depicts the computation of a three dimensional array [t][k][c]. 
The probability that two samples before time step t become a segment is computed; the resulting segment would be assigned to class two. Samples at t − 1 and t become a segment, and all the segments whose end point is t − 2 can potentially transit to this segment. [t][2][2] can be computed by marginalizing out these possibilities. At Pg. [06], Col. [02], Para. [01]-NAKAMURA discloses from t = T, length k1 and class c1 are determined according to k1, c1 ∼ [T][k][c], and sT−k1 : T becomes a segment whose class is c1. Then, length k2 and class c2 of the next segment are determined according to k2, c2 ∼ [T − k1][k][c]. By iterating this procedure until t = 0, the observed sequence can be divided into segments and their classes can be determined). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of XIAOCHUN in view of NAKAMURA of an action segment estimation model building device with the teachings of NAKAMURA wherein the original parameter is attenuated so as to become zero after a predetermined number of clock-times distant. The combined system of XIAOCHUN would thereby attenuate the original parameter so as to become zero after a predetermined number of clock-times distant. The motivation behind the modification would have been to obtain a system that improves machine learning model training, accuracy, and classification, as well as resolution, since both XIAOCHUN and NAKAMURA concern image analysis and Markov models. 
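The time-direction oversampling at issue in claims 2-3 (an original parameter randomly set at each clock-time is propagated to times before and after that clock-time while being attenuated, reaching zero a predetermined number of clock-times away) can be illustrated with a short sketch. The linear decay schedule, the matrix layout, and the function names are assumptions for illustration, not the applicant's disclosed implementation.

```python
import numpy as np

def propagate_with_attenuation(T, reach=3, seed=0):
    """Sketch of time-direction oversampling with attenuation.

    A parameter randomly set at each clock-time t is propagated to
    neighbouring clock-times with linear attenuation, becoming zero once
    `reach` clock-times away (the claim 3 limitation).
    Returns (original, P) where P[t, s] is the contribution the original
    parameter at clock-time t makes at clock-time s.
    """
    rng = np.random.default_rng(seed)
    original = rng.random(T)           # original parameter per clock-time
    P = np.zeros((T, T))
    for t in range(T):
        for s in range(max(0, t - reach + 1), min(T, t + reach)):
            # Full weight at s == t, linearly down to zero at |s - t| == reach.
            P[t, s] = original[t] * (1 - abs(s - t) / reach)
    return original, P

def select_feature_times(P):
    """Claim 2 sketch: at each clock-time, pick the source clock-time whose
    parameter (original or propagated) is largest."""
    return P.argmax(axis=0)
```

The selected source clock-times would then index which movement's feature value is used at each clock-time of the augmented sequence.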
XIAOCHUN’s systems and methods provide a hierarchical hidden Markov model for capturing and understanding workers’ high-level activities in far-field surveillance videos that can be implemented for objective work sampling, personal physical fatigue, trade-level health risk assessment, and process-based quality control, while NAKAMURA’s systems and methods provide a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments and makes it possible to efficiently search for all possible segment lengths and classes in an unsupervised manner. Please see XIAOCHUN et al. (Xiaochun, Luo et al., “Capturing and Understanding Workers’ Activities in Far-Field Surveillance Videos with Deep Action Recognition and Bayesian Nonparametric Learning”, Computer-Aided Civil and Infrastructure Engineering, October 2018, pages 1-19, DOI:10.1111/mice.12419), Abstract and Pg. [16], Col. [01], Para. [01-02], and NAKAMURA et al. (NAKAMURA, Tomoaki et al., “Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes”, Front. Neurorobot, 20 December 2017, Volume 11 – 2017, https://doi.org/10.3389/fnbot.2017.00067), Abstract and Pg. [10], Col. [02], Para. [01-02]. Regarding claim 6, XIAOCHUN explicitly teaches an action segment estimation model building method (Fig. 1. Abstract-XIAOCHUN discloses Bayesian nonparametric hidden semi-Markov model was innovatively used to model and infer workers’ activities based on action sequences. Please also read Pg. [07], Col. [01], Para. [04]) comprising: by a processor (Fig. 1. Pg. [07], Col. [02], Para. [01]-XIAOCHUN discloses the computation was conducted with a PC equipped with a memory of 32 GB), in a hidden semi-Markov model including a plurality of second hidden Markov models each containing a plurality of first hidden Markov models using types of movement of a person as states (Fig. 1. Pg. [02], Col. [02], Para. 
[02]- XIAOCHUN discloses this study set out to develop a hierarchical statistical method for recognizing workers’ activities in far-fields surveillance videos. First, the temporal segment networks (TSNs) (Wang et al., 2015) were used to recognize workers’ actions, and a new fusion strategy was proposed to consider the characteristics of far-field surveillance videos. Second, the hierarchical Dirichlet process-hidden semi-Markov model (HDP-HSMM) (Johnson and Willsky, 2012) was employed to model and infer workers’ latent states), and the plurality of second hidden Markov models each using actions defined by combining a plurality of the movements as states, learning observation probabilities for each of the movement types of the plurality of first hidden Markov models using unsupervised learning (Fig. 1. Pg. [06], Col. [01], Para. [04]-XIAOCHUN discloses Figure 1 shows the probabilistic graphical model of the HDP-HSMM (Johnson and Willsky, 2013), which takes action sequences as input, models and clusters typical workers’ activities, and segments action sequences temporally. In Bayesian inference, workers’ activities are referred to as latent states, while basic actions are referred to as observations. The HDP-HSMM adopted in this study is with explicit duration semi-Markovianity, which means each state’s duration is given an explicit distribution. Further at Pg. [07], Col. [01], Para. [01]-XIAOCHUN discloses the observation sequence (ys) of state zs can be drawn from the observation distribution given parameters θzs. Please also read pg. [07], Col. [02], Para. [03]); Although XIAOCHUN explicitly teaches fixing the learnt observation probabilities, generating second supervised data by augmenting input first supervised data (Fig. 1. Pg. [05], Col. [01], Para. [03]-XIAOCHUN discloses there were two steps for preparing the observational data: extracting spatial and temporal streams and recognizing basic actions with the TSNs. At Pg. [05], Col. [01], Para. 
[03]-XIAOCHUN discloses to start the tracking process, each worker of interest was manually selected by putting a bounding box, which is the minimum rectangle enclosing the worker (wherein action clips were created from the frames of the tracking process and the temporal/spatial CNN (i.e. “TSN”) in Wang et al. (2016) was used for both training and testing to create and output snippets with preliminary predictions for action classes in the spatial and temporal directions). At Pg. [07], Col. [02], Para. [03]-XIAOCHUN discloses a total of 540 clips were manually selected to construct the training and test data sets for the TSNs. These action clips were manually categorized into seven action classes. Further at Pg. 9, Col. [02], Para. [02]-XIAOCHUN discloses the testing process used the same settings regarding augmentation with Wang et al. (2016)), and learning transition probabilities of the movements of the first hidden Markov models in which the second supervised data is used (Fig. 1. Pg. [06], Col. [01], Para. [04]-XIAOCHUN discloses the model employs an HDP (Teh et al., 2006) to define a global random probability measure. At pg. [06], Col. [01], Para. [04]-XIAOCHUN discloses the random measures are independently distributed according to the Dirichlet process and linked by being drawn from the same discrete measure β, thus E[πi]=β. πi can be interpreted as probability measures on the positive integers, which are the identifiers of the observations in the ith activity. Further at Pg. [06], Col. [02], Para. [01]-XIAOCHUN discloses each πi can be interpreted as the transition distribution from state i, namely the ith row of the transition matrix of the HSMM (wherein each state zs can be drawn from the transition distribution πzs−1, where zs−1 indexes the previous state and the observation sequence (ys) of state zs can be drawn from the observation distribution given parameters θz). Please see equations (5-10) and read pg. 10, Col. [01], para. 
[06-07]); building the hidden semi-Markov model that is a model for estimating segments of the actions by using the learnt observation probabilities and the learnt transition probabilities (Fig. 1. Pg. [09], Col. [02], Para. [03]-XIAOCHUN discloses the architectures of the spatial and temporal CNNs for testing proposed in Wang et al. (2016) were used. The testing process used the same settings regarding augmentation with Wang et al. (2016). To implement augmentation, all RGB frames and optical flow images were reshaped to 340 × 256 pixels before feeding to the CNNs, and a sliding window of 224 × 224 pixels was used on the reshaped images, generating ten augmented samples (wherein action clips were created from the frames of the tracking process and the temporal/spatial CNN (i.e. “TSN”) in Wang et al. (2016) was used for both training and testing to create and output snippets with preliminary predictions for action classes in the spatial and temporal directions). Please also read pg. 9, Col. [02], para. [02]), wherein the action segment estimation model building method augments the first supervised data by adding teacher information of the first supervised data to each item of data generated by at least one of oversampling in a time direction or oversampling in a feature space (Fig. 1. Pg. [05], Col. [02], Para. [03]-XIAOCHUN discloses in the Temporal Segment Networks, each snippet in the sequence will produce its preliminary prediction of the action classes, and then a consensus among the snippets will be derived as the clip-level prediction. The spatial stream and the temporal stream of an action t are divided into K segments uniformly along the temporal dimension. A spatial snippet (i.e., an RGB image) will be sampled randomly from each segment of the spatial stream. The spatial CNN takes the K images as input and produces S=S1,S2,...,SK as output, where Si=(s1,s2,...,sN) is an action classification score vector of the ith snippet. 
A temporal snippet, which is a stack of x-direction and y-direction optical flow field images, is sampled from each segment of the temporal stream. The temporal CNN takes the K image stacks as input and produces T=T1,T2,...,TK, where Tj=(t1,t2,...,tN) is an action classification), and wherein the oversampling in the feature space is performed (Fig. 1. Pg. [05], Col. [02], Para. [03]-XIAOCHUN discloses in the Temporal Segment Networks, each snippet in the sequence will produce its preliminary prediction of the action classes, and then a consensus among the snippets will be derived as the clip-level prediction. The spatial stream and the temporal stream of an action t are divided into K segments uniformly along the temporal dimension. A spatial snippet (i.e., an RGB image) will be sampled randomly from each segment of the spatial stream. The spatial CNN takes the K images as input and produces S=S1,S2,...,SK as output, where Si=(s1,s2,...,sN) is an action classification score vector of the ith snippet. A temporal snippet, which is a stack of x-direction and y-direction optical flow field images, is sampled from each segment of the temporal stream. The temporal CNN takes the K image stacks as input and produces T=T1,T2,...,TK, where Tj=(t1,t2,...,tN) is an action classification). XIAOCHUN fails to explicitly teach learn transition probabilities of the movements of the first hidden Markov models by supervised learning in which the second supervised data is used; and wherein the oversampling in the feature space is performed by adding noise related to a speed of each body location of a person performing a movement in the first supervised data to a feature value of the movement for each body location. However, NAKAMURA explicitly teaches learn transition probabilities of the movements of the first hidden Markov models (Fig. 3. Pg. [03], Col. [02], Para. 
[02]-NAKAMURA discloses we propose GP-HSMM (Gaussian process–hidden semi-Markov model), a novel method to divide time series motion data into unit actions by using a stochastic model to estimate their lengths and classes. The proposed method involves a hidden semi-Markov model (HSMM) with a Gaussian process (GP) emission distribution, where each state represents a unit action. At Pg. [04], Col. [01], Para. [02]-NAKAMURA discloses we utilize Gaussian process regression, which learns emission xi of time step i in a segment. This makes it possible to represent each unit action as part of a continuous trajectory. If we obtain pairs (i, Xc) of emissions xi of time step i of segments belonging to the same class c, a predictive distribution whereby the emission of time step i becomes x follows a Gaussian distribution. At Pg. [04], Col. [02], Para. [02]-NAKAMURA discloses we use the blocked Gibbs sampler, which samples segments and their classes in an observed sequence. In the initialization phase, all observed sequences are first randomly divided into segments. Segments xnj(j = 1, 2, · · · , Jn) in observed sequence sn are then removed from the learning data, and parameter Xc of the Gaussian process and transition probability P(c|c′) of HSMM are updated) by supervised learning in which the second supervised data is used (Fig. 12. Pg. [08], Col. [02], Para. [02]-NAKAMURA discloses we then applied our proposed method to more complex motion capture data, which consisted of the basic motions of karate (called kata in Japanese) as shown in Figure 10 from a motion capture library. There are fixed motion patterns (punches or guards) in kata, and it is easy to form a ground truth for the segmentation. At pg. [07], Col. [02], para. 
[01]-NAKAMURA discloses we computed the normalized Hamming distance between the unsupervised segmentation and the ground truth (wherein c and ¯c represent sequences of estimated motion classes and true motion classes)); and wherein the oversampling in the feature space is performed by adding noise related to a speed of each body location of a person performing a movement in the first supervised data to a feature value of the movement for each body location (Fig. 3. Pg. [04], Col. [01], Para. [02]-NAKAMURA discloses we utilize Gaussian process regression, which learns emission xi of time step i in a segment. This makes it possible to represent each unit action as part of a continuous trajectory. If we obtain pairs (i, Xc) of emissions xi of time step i of segments belonging to the same class c, a predictive distribution whereby the emission of time step i becomes x follows a Gaussian distribution, where a hyperparameter represents noise in the observation. In Equation (3), k is a vector containing the elements k(ip, i), and c is a scalar value k(i, i). Using the kernel function, GP can learn a time-series sequence that contains complex changes. Please also read Pg. [04], Col. [02], Para. [01-02]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of XIAOCHUN of an action segment estimation model building method with the teachings of NAKAMURA of learning transition probabilities of the movements of the first hidden Markov models by supervised learning in which the second supervised data is used. The combined method of XIAOCHUN would thereby fix the learnt observation probabilities, generate second supervised data by augmenting input first supervised data, and learn transition probabilities of the movements of the first hidden Markov models by supervised learning in which the second supervised data is used. 
The motivation behind the modification would have been to obtain a method that improves machine learning model training, accuracy, and classification, as well as resolution, since both XIAOCHUN and NAKAMURA concern image analysis and Markov models. XIAOCHUN’s systems and methods provide a hierarchical hidden Markov model for capturing and understanding workers’ high-level activities in far-field surveillance videos that can be implemented for objective work sampling, personal physical fatigue, trade-level health risk assessment, and process-based quality control, while NAKAMURA’s systems and methods provide a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments and makes it possible to efficiently search for all possible segment lengths and classes in an unsupervised manner. Please see XIAOCHUN et al. (Xiaochun, Luo et al., “Capturing and Understanding Workers’ Activities in Far-Field Surveillance Videos with Deep Action Recognition and Bayesian Nonparametric Learning”, Computer-Aided Civil and Infrastructure Engineering, October 2018, pages 1-19, DOI:10.1111/mice.12419), Abstract and Pg. [16], Col. [01], Para. [01-02], and NAKAMURA et al. (NAKAMURA, Tomoaki et al., “Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes”, Front. Neurorobot, 20 December 2017, Volume 11 – 2017, https://doi.org/10.3389/fnbot.2017.00067), Abstract and Pg. [10], Col. [02], Para. [01-02]. Regarding claim 7, XIAOCHUN in view of NAKAMURA explicitly teaches the action segment estimation model building method of claim 6. XIAOCHUN further teaches wherein: at each clock-time, a feature value of a movement corresponding to a clock-time of a maximum parameter among the original parameter and parameters propagated from the before and after clock-times is selected as a feature value for each of the clock-times (Fig. 1. Pg. [06], Col. [01], Para. [02]-XIAOCHUN discloses Wang et al. 
(2016) proposed two strategies to aggregate the scores at the segment level to produce the score vector out of each stream: maximum and mean. The maximum strategy uses the score vector of the segment with the highest score element, whereas the mean strategy uses the mean score vector of all segments. Wang et al. (2016) proposed the weighted average strategy at the stream level to produce the final action score vector, which is the weighted average of the score vectors of the two streams. Please also read Pg. [05], Col. [02], Para. [03]). Although XIAOCHUN explicitly teaches the oversampling in the time direction is performed by propagating, for each clock-time, an original parameter randomly set at the clock-time to times before and after clock-times (Fig. 1. Pg. [05], Col. [01], Para. [03]-XIAOCHUN discloses there were two steps for preparing the observational data: extracting spatial and temporal streams and recognizing basic actions with the Temporal Segment Networks (wherein each worker of interest is tracked by a sequence of bounding boxes that are each represented by a frame number and pixel coordinates, action clips with a duration of 3 seconds are created from the frames, and spatial and temporal streams are created based on the action clips). At pg. [05], Col. [02], para. [03]-XIAOCHUN discloses in the Temporal Segment Networks, each snippet in the sequence will produce its preliminary prediction of the action classes, and then a consensus among the snippets will be derived as the clip-level prediction. The spatial stream and the temporal stream of an action t are divided into K segments uniformly along the temporal dimension. A temporal snippet, which is a stack of x-direction and y-direction optical flow field images, is sampled from each segment of the temporal stream. The temporal CNN takes the K image stacks as input and produces T=T1,T2,...,TK, where Tj=(t1,t2,...,tN) is an action classification). Please also read Pg. [06], Col. [01], Para. [03]. 
XIAOCHUN fails to explicitly teach while attenuating the original parameter. However, NAKAMURA explicitly teaches while attenuating the original parameter (Fig. 5. Pg. [05], Col. [01], Para. [01]-NAKAMURA discloses after sampling xnj and cnj, parameter Xc of the Gaussian process and transition probability P(c|c′) of the hidden semi-Markov model are updated by adding them to the learning data. The segments and parameters of Gaussian processes are optimized alternately by iteratively performing Algorithm 1. Algorithm 1 shows the pseudocode of the blocked Gibbs sampler. At Pg. [06], Col. [01], Para. [01]-NAKAMURA discloses segment xj and its class are determined by backward sampling length k and class c of the segment, based on forward probabilities. At Pg. [06], Col. [02], Para. [01]-NAKAMURA discloses by iterating this procedure until t = 0, the observed sequence can be divided into segments and their classes can be determined. Please also read Pg. [06], Col. [01], Para. [01]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of XIAOCHUN in view of NAKAMURA of an action segment estimation model building method with the teachings of NAKAMURA of attenuating the original parameter. The combined action segment estimation model building method of XIAOCHUN would thereby perform the oversampling in the time direction by propagating, for each clock-time, an original parameter randomly set at the clock-time to times before and after clock-times while attenuating the original parameter. The motivation behind the modification would have been to obtain an action segment estimation model building method that improves machine learning model training, accuracy, and classification, as well as resolution, since both XIAOCHUN and NAKAMURA concern image analysis and Markov models. 
XIAOCHUN’s systems and methods provide a hierarchical hidden Markov model for capturing and understanding workers’ high-level activities in far-field surveillance videos that can be implemented for objective work sampling, personal physical fatigue, trade-level health risk assessment, and process-based quality control, while NAKAMURA’s systems and methods provide a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments and makes it possible to efficiently search for all possible segment lengths and classes in an unsupervised manner. Please see XIAOCHUN et al. (Xiaochun, Luo et al., “Capturing and Understanding Workers’ Activities in Far-Field Surveillance Videos with Deep Action Recognition and Bayesian Nonparametric Learning”, Computer-Aided Civil and Infrastructure Engineering, October 2018, pages 1-19, DOI:10.1111/mice.12419), Abstract and Pg. [16], Col. [01], Para. [01-02], and NAKAMURA et al. (NAKAMURA, Tomoaki et al., “Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes”, Front. Neurorobot, 20 December 2017, Volume 11 – 2017, https://doi.org/10.3389/fnbot.2017.00067), Abstract and Pg. [10], Col. [02], Para. [01-02]. Regarding claim 8, XIAOCHUN in view of NAKAMURA explicitly teaches the action segment estimation model building method of claim 7. XIAOCHUN fails to explicitly teach wherein the original parameter is attenuated so as to become zero after a predetermined number of clock-times distant. However, NAKAMURA explicitly teaches wherein the original parameter is attenuated so as to become zero after a predetermined number of clock-times distant (Fig. 6. Pg. [06], Col. [01], Para. [01]-NAKAMURA discloses we regard segments and their classes as latent variables that are sampled by forward filtering-backward sampling (Algorithm 2). Figure 6 depicts the computation of a three dimensional array [t][k][c]. 
The probability that two samples before time step t become a segment is computed; the resulting segment would be assigned to class two. Samples at t − 1 and t become a segment, and all the segments whose end point is t − 2 can potentially transit to this segment. [t][2][2] can be computed by marginalizing out these possibilities. At Pg. [06], Col. [02], Para. [01]-NAKAMURA discloses from t = T, length k1 and class c1 are determined according to k1, c1 ∼ [T][k][c], and sT−k1 : T becomes a segment whose class is c1. Then, length k2 and class c2 of the next segment are determined according to k2, c2 ∼ [T − k1][k][c]. By iterating this procedure until t = 0, the observed sequence can be divided into segments and their classes can be determined). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of XIAOCHUN in view of NAKAMURA of having an action segment estimation model building method, with the teachings of NAKAMURA of having the original parameter attenuated so as to become zero after a predetermined number of clock-times distant. Wherein XIAOCHUN’s method would have the original parameter attenuated so as to become zero after a predetermined number of clock-times distant. The motivation behind the modification would have been to obtain a method that improves machine learning model training, accuracy, and classification, as well as resolution, since both XIAOCHUN and NAKAMURA concern image analysis and Markov models. 
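The forward filtering-backward sampling passage of NAKAMURA quoted above can be illustrated as a toy backward-sampling pass. This is a simplified sketch under stated assumptions, not the paper's implementation: the forward array is assumed precomputed and unnormalized, alpha[t][k][c] is read as the forward probability that a segment of length k + 1 and class c ends at time step t, and the sampling details are illustrative:

```python
import random

def backward_sample(alpha, rng=None):
    """Toy backward-sampling pass (sketch): segment lengths and classes
    are drawn from the last time step back to the start, so the observed
    sequence is divided into contiguous classified segments."""
    rng = rng or random.Random(0)
    segments = []
    t = len(alpha) - 1
    while t >= 0:
        # only segment lengths that fit before t are valid (k <= t)
        choices = [(k, c, alpha[t][k][c])
                   for k in range(min(t + 1, len(alpha[t])))
                   for c in range(len(alpha[t][k]))]
        total = sum(w for _, _, w in choices)
        r = rng.uniform(0.0, total)
        for k, c, w in choices:          # sample (length, class) jointly
            r -= w
            if r <= 0.0:
                break
        segments.append((t - k, t, c))   # segment covers steps t-k .. t
        t = t - k - 1                    # jump to end of previous segment
    return segments[::-1]
```

The masking of lengths larger than t guarantees the procedure terminates at t = 0 with a contiguous segmentation, mirroring the "iterating this procedure until t = 0" step in the quoted passage.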
Wherein XIAOCHUN’s systems and methods provide a hierarchical hidden Markov model for capturing and understanding workers’ high-level activities in far-field surveillance videos that can be implemented for objective work sampling, personal physical fatigue, trade-level health risk assessment, and process-based quality control, while NAKAMURA’s systems and methods provide a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments and makes it possible to efficiently search for all possible segment lengths and classes in an unsupervised manner. Please see XIAOCHUN et al. (Xiaochun, Luo et al., “Capturing and Understanding Workers’ Activities in Far-Field Surveillance Videos with Deep Action Recognition and Bayesian Nonparametric Learning”, Computer-Aided Civil and Infrastructure Engineering, October 2018, pages 1-19, DOI:10.1111/mice.12419), Abstract and Pg. [16], Col. [01], Para. [01-02], and NAKAMURA et al. (NAKAMURA, Tomoaki et al., “Segmenting Continuous Motions with Hidden Semi-Markov Models and Gaussian Processes”, Front. Neurorobot., 20 December 2017, Volume 11 – 2017, https://doi.org/10.3389/fnbot.2017.00067), Abstract and Pg. [10], Col. [02], Para. [01-02]. Regarding claim 11, XIAOCHUN explicitly teaches a non-transitory recording medium storing a program that causes a computer to execute an action segment estimation model building processing, the processing (Fig. 1. Abstract-XIAOCHUN discloses Bayesian nonparametric hidden semi-Markov model was innovatively used to model and infer workers’ activities based on action sequences. Please also read Pg. [07], Col. [01], Para. [04]) comprising: in a hidden semi-Markov model including a plurality of second hidden Markov models each containing a plurality of first hidden Markov models using types of movement of a person as states (Fig. 1. Pg. [02], Col. [02], Para. 
[02]-XIAOCHUN discloses this study set out to develop a hierarchical statistical method for recognizing workers’ activities in far-field surveillance videos. First, the temporal segment networks (TSNs) (Wang et al., 2015) were used to recognize workers’ actions, and a new fusion strategy was proposed to consider the characteristics of far-field surveillance videos. Second, the hierarchical Dirichlet process-hidden semi-Markov model (HDP-HSMM) (Johnson and Willsky, 2012) was employed to model and infer workers’ latent states), and the plurality of second hidden Markov models each using actions defined by combining a plurality of the movements as states, learning observation probabilities for each of the movement types of the plurality of first hidden Markov models using unsupervised learning (Fig. 1. Pg. [06], Col. [01], Para. [04]-XIAOCHUN discloses Figure 1 shows the probabilistic graphical model of the HDP-HSMM (Johnson and Willsky, 2013), which takes action sequences as input, models and clusters typical workers’ activities, and segments action sequences temporally. In Bayesian inference, workers’ activities are referred to as latent states, while basic actions are referred to as observations. The HDP-HSMM adopted in this study is with explicit duration semi-Markovianity, which means each state’s duration is given an explicit distribution. Further at Pg. [07], Col. [01], Para. [01]-XIAOCHUN discloses the observation sequence (ys) of state zs can be drawn from the observation distribution given parameters θzs. Please also read pg. [07], Col. [02], Para. [03]). Although XIAOCHUN explicitly teaches fixing the learnt observation probabilities, generating second supervised data by augmenting input first supervised data (Fig. 1. Pg. [05], Col. [01], Para. 
[03]-XIAOCHUN discloses to start the tracking process, each worker of interest was manually selected by putting a bounding box, which is the minimum rectangle enclosing the worker (wherein action clips were created from the frames of the tracking process and the temporal/spatial CNN (i.e. “TSN”) in Wang et al. (2016) was used for both training and testing to create and output snippets with preliminary predictions for action classes in the spatial and temporal directions). At Pg. [07], Col. [02], Para. [03]-XIAOCHUN discloses a total of 540 clips were manually selected to construct the training and test data sets for the TSNs. These action clips were manually categorized into seven action classes. Further at pg. 9, Col. [02], para. [02]-XIAOCHUN discloses the testing process used the same settings regarding augmentation with Wang et al. (2016)), and learning transition probabilities of the movements of the first hidden Markov models by supervised learning in which the second supervised data is used (Fig. 1. Pg. [06], Col. [01], Para. [04]-XIAOCHUN discloses the model employs an HDP (Teh et al., 2006) to define a global random probability measure. At pg. [06], Col. [01], Para. [04]-XIAOCHUN discloses the random measures are independently distributed according to the Dirichlet process and linked by being drawn from the same discrete measure β, thus E[πi]=β. πi can be interpreted as probability measures on the positive integers, which are the identifiers of the observations in the ith activity. Further at Pg. [06], Col. [02], Para. [01]-XIAOCHUN discloses each πi can be interpreted as the transition distribution from state i, namely the ith row of the transition matrix of the HSMM (wherein each state zs can be drawn from the transition distribution πzs−1, where zs−1 indexes the previous state and the observation sequence (ys) of state zs can be drawn from the observation distribution given parameters θz). Please see equations (5-10) and read pg. 10, Col. [01], para. 
[06-07]); and building the hidden semi-Markov model that is a model for estimating segments of the actions by using the learnt observation probabilities and the learnt transition probabilities (Fig. 1. Pg. [09], Col. [02], Para. [02]-XIAOCHUN discloses the architectures of the spatial and temporal CNNs for testing proposed in Wang et al. (2016) were used. The testing process used the same settings regarding augmentation with Wang et al. (2016). To implement augmentation, all RGB frames and optical flow images were reshaped to 340 × 256 pixels before feeding to the CNNs, and a sliding window of 224 × 224 pixels was used on the reshaped images, generating ten augmented samples (wherein action clips were created from the frames of the tracking process and the temporal/spatial CNN (i.e. “TSN”) in Wang et al. (2016) was used for both training and testing to create and output snippets with preliminary predictions for action classes in the spatial and temporal directions). Please also read pg. 9, Col. [02], para. [02]), wherein, in the processing, augmentation is performed on the first supervised data by adding teacher information of the first supervised data to each item of data generated by at least one of oversampling in a time direction or oversampling in a feature space (Fig. 1. pg. [05], Col. [02], para. [03]-XIAOCHUN discloses in the TSNs, each snippet in the sequence will produce its preliminary prediction of the action classes, and then a consensus among the snippets will be derived as the clip-level prediction. The spatial stream and the temporal stream of an action t are divided into K segments uniformly along the temporal dimension. A spatial snippet (i.e., an RGB image) will be sampled randomly from each segment of the spatial stream. The spatial CNN takes the K images as input and produces S=S1,S2,...,SK as output, where Si=(s1,s2,...,sN) is an action classification score vector of the ith snippet. 
A temporal snippet, which is a stack of x-direction and y-direction optical flow field images, is sampled from each segment of the temporal stream. The temporal CNN takes the K image stacks as input and produces T=T1,T2,...,TK, where Tj=(t1,t2,...,tN) is an action classification), and wherein the oversampling in the feature space is performed (Fig. 1. pg. [05], Col. [02], para. [03]-XIAOCHUN discloses in the Temporal Segment Networks, each snippet in the sequence will produce its preliminary prediction of the action classes, and then a consensus among the snippets will be derived as the clip-level prediction. The spatial stream and the temporal stream of an action t are divided into K segments uniformly along the temporal dimension. A spatial snippet (i.e., an RGB image) will be sampled randomly from each segment of the spatial stream. The spatial CNN takes the K images as input and produces S=S1,S2,...,SK as output, where Si=(s1,s2,...,sN) is an action classification score vector of the ith snippet. A temporal snippet, which is a stack of x-direction and y-direction optical flow field images, is sampled from each segment of the temporal stream. The temporal CNN takes the K image stacks as input and produces T=T1,T2,...,TK, where Tj=(t1,t2,...,tN) is an action classification). XIAOCHUN fails to explicitly teach learning transition probabilities of the movements of the first hidden Markov models by supervised learning in which the second supervised data is used; and wherein the oversampling in the feature space is performed by adding noise related to a speed of each body location of a person performing a movement in the first supervised data to a feature value of the movement for each body location. However, NAKAMURA explicitly teaches learning transition probabilities of the movements of the first hidden Markov models (Fig. 3. Pg. [03], Col. [02], Para. 
[02]-NAKAMURA discloses we propose GP-HSMM (Gaussian process–hidden semi-Markov model), a novel method to divide time series motion data into unit actions by using a stochastic model to estimate their lengths and classes. The proposed method involves a hidden semi-Markov model (HSMM) with a Gaussian process (GP) emission distribution, where each state represents a unit action. At Pg. [04], Col. [01], Para. [02]-NAKAMURA discloses we utilize Gaussian process regression, which learns emission xi of time step i in a segment. This makes it possible to represent each unit action as part of a continuous trajectory. If we obtain pairs (i, Xc) of emissions xi of time step i of segments belonging to the same class c, a predictive distribution whereby the emission of time step i becomes x follows a Gaussian distribution. At Pg. [04], Col. [02], Para. [02]-NAKAMURA discloses we use the blocked Gibbs sampler, which samples segments and their classes in an observed sequence. In the initialization phase, all observed sequences are first randomly divided into segments. Segments xnj(j = 1, 2, · · · , Jn) in observed sequence sn are then removed from the learning data, and parameter Xc of the Gaussian process and transition probability P(c|c′) of HSMM are updated) by supervised learning in which the second supervised data is used (Fig. 12. Pg. [08], Col. [02], Para. [02]-NAKAMURA discloses we then applied our proposed method to more complex motion capture data, which consisted of the basic motions of karate (called kata in Japanese) as shown in Figure 10 from a motion capture library. There are fixed motion patterns (punches or guards) in kata, and it is easy to form a ground truth for the segmentation. At Pg. [07], Col. [02], Para. 
[01]-NAKAMURA discloses we computed the normalized Hamming distance between the unsupervised segmentation and the ground truth (wherein c and ¯c represent sequences of estimated motion classes and true motion classes)); and wherein the oversampling in the feature space is performed by adding noise related to a speed of each body location of a person performing a movement in the first supervised data to a feature value of the movement for each body location (Fig. 3. Pg. [04], Col. [01], para. [02]-NAKAMURA discloses we utilize Gaussian process regression, which learns emission xi of time step i in a segment. This makes it possible to represent each unit action as part of a continuous trajectory. If we obtain pairs (i, Xc) of emissions xi of time step i of segments belonging to the same class c, a predictive distribution whereby the emission of time step i becomes x follows a Gaussian distribution. is a hyperparameter that represents noise in the observation. In Equation (3), k is a vector containing the elements k(ip, i), and c is a scalar value k(i, i). Using the kernel function, GP can learn a time-series sequence that contains complex changes. Please also read Pg. [04], Col. [02], Para. [01-02]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of XIAOCHUN of having a non-transitory recording medium storing a program that causes a computer to execute an action segment estimation model building processing, with the teachings of NAKAMURA of having learning transition probabilities of the movements of the first hidden Markov models by supervised learning in which the second supervised data is used; and wherein the oversampling in the feature space is performed by adding noise related to a speed of each body location of a person performing a movement in the first supervised data to a feature value of the movement for each body location. 
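The feature-space oversampling limitation discussed above (adding speed-related noise to each body location's feature value) can be sketched as follows. This is a hedged illustration of the claim language only, not the applicant's or NAKAMURA's implementation; the linear speed-to-noise mapping, the Gaussian noise model, and all names are assumptions:

```python
import random

def feature_space_oversample(features, speeds, scale=0.1, rng=None):
    """Hypothetical sketch: for each body location, noise whose
    magnitude is tied to that location's speed is added to the
    location's feature value. `features` and `speeds` are per-body-
    location values for one frame (assumed representation)."""
    rng = rng or random.Random(0)
    augmented = []
    for f, v in zip(features, speeds):
        sigma = scale * abs(v)            # noise magnitude grows with speed
        augmented.append(f + rng.gauss(0.0, sigma))
    return augmented
```

A body location with zero speed receives no noise under this sketch, while faster-moving locations receive proportionally larger perturbations.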
Wherein XIAOCHUN’s non-transitory recording medium storing a program would have fixing the learnt observation probabilities, generating second supervised data by augmenting input first supervised data, and learning transition probabilities of the movements of the first hidden Markov models by supervised learning in which the second supervised data is used; and building the hidden semi-Markov model that is a model for estimating segments of the actions by using the learnt observation probabilities and the learnt transition probabilities, wherein, in the processing, augmentation is performed on the first supervised data by adding teacher information of the first supervised data to each item of data generated by at least one of oversampling in a time direction or oversampling in a feature space, and wherein the oversampling in the feature space is performed by adding noise related to a speed of each body location of a person performing a movement in the first supervised data to a feature value of the movement for each body location. The motivation behind the modification would have been to obtain a non-transitory recording medium storing a program that improves machine learning model training, accuracy, and classification, as well as resolution, since both XIAOCHUN and NAKAMURA concern image analysis and Markov models. Wherein XIAOCHUN’s systems and methods provide a hierarchical hidden Markov model for capturing and understanding workers’ high-level activities in far-field surveillance videos that can be implemented for objective work sampling, personal physical fatigue, trade-level health risk assessment, and process-based quality control, while NAKAMURA’s systems and methods provide a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments and makes it possible to efficiently search for all possible segment lengths and classes in an unsupervised manner. Please see XIAOCHUN et al. 
(Xiaochun, Luo et al., “Capturing and Understanding Workers’ Activities in Far-Field Surveillance Videos with Deep Action Recognition and Bayesian Nonparametric Learning”, Computer-Aided Civil and Infrastructure Engineering, October 2018, pages 1-19, DOI:10.1111/mice.12419), Abstract and Pg. [16], Col. [01], Para. [01-02], and NAKAMURA et al. (NAKAMURA, Tomoaki et al., “Segmenting Continuous Motions with Hidden Semi-Markov Models and Gaussian Processes”, Front. Neurorobot., 20 December 2017, Volume 11 – 2017, https://doi.org/10.3389/fnbot.2017.00067), Abstract and Pg. [10], Col. [02], Para. [01-02]. Regarding claim 12, XIAOCHUN in view of NAKAMURA explicitly teach the non-transitory recording medium of claim 11. XIAOCHUN further teaches wherein: at each clock-time, a feature value of a movement corresponding to a clock-time of a maximum parameter among the original parameter and parameters propagated from the before and after clock-times is selected as a feature value for each of the clock-times (Fig. 1. Pg. [06], Col. [01], Para. [02]-XIAOCHUN discloses Wang et al. (2016) proposed two strategies to aggregate the scores at the segment level to produce the score vector out of each stream: maximum and mean. The maximum strategy uses the score vector of the segment with the highest score element, whereas the mean strategy uses the mean score vector of all segments. Wang et al. (2016) proposed the weighted average strategy at the stream level to produce the final action score vector, which is the weighted average of the score vectors of the two streams. Please also read Pg. [05], Col. [02], Para. [03]). Although XIAOCHUN explicitly teaches the oversampling in the time direction is performed by propagating, for each clock-time, an original parameter randomly set at the clock-time to times before and after clock-times (Fig. 1. Pg. [05], Col. [01], Para. 
[03]-XIAOCHUN discloses there were two steps for preparing the observational data: extracting spatial and temporal streams and recognizing basic actions with the Temporal Segment Networks (wherein each worker of interest is tracked by a sequence of bounding boxes that are each represented by a frame number and pixel coordinates, action clips with a duration of 3 seconds are created from the frames, and spatial and temporal streams are created based on the action clips). At pg. [05], Col. [02], para. [03]-XIAOCHUN discloses in the Temporal Segment Networks, each snippet in the sequence will produce its preliminary prediction of the action classes, and then a consensus among the snippets will be derived as the clip-level prediction. The spatial stream and the temporal stream of an action t are divided into K segments uniformly along the temporal dimension. A temporal snippet, which is a stack of x-direction and y-direction optical flow field images, is sampled from each segment of the temporal stream. The temporal CNN takes the K image stacks as input and produces T=T1,T2,...,TK, where Tj=(t1,t2,...,tN) is an action classification). Please also read Pg. [06], Col. [01], Para. [03]). XIAOCHUN fails to explicitly teach while attenuating the original parameter. However, NAKAMURA explicitly teaches while attenuating the original parameter (Fig. 5. Pg. [05], Col. [01], Para. [01]-NAKAMURA discloses after sampling xnj and cnj, parameter Xc of the Gaussian process and transition probability P(c|c′) of the hidden semi-Markov model are updated by adding them to the learning data. The segments and parameters of Gaussian processes are optimized alternately by iteratively performing Algorithm 1. Algorithm 1 shows the pseudocode of the blocked Gibbs sampler. At Pg. [06], Col. [01], Para. [01]-NAKAMURA discloses segment xj and its class are determined by backward sampling length k and class c of the segment, based on forward probabilities. At Pg. [06], Col. [02], Para. 
[01]-NAKAMURA discloses by iterating this procedure until t = 0, the observed sequence can be divided into segments and their classes can be determined. Please also read Pg. [06], Col. [01], Para. [01]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of XIAOCHUN in view of NAKAMURA of having a non-transitory recording medium storing a program that causes a computer to execute an action segment estimation model building processing, with the teachings of NAKAMURA of attenuating the original parameter. Wherein XIAOCHUN’s non-transitory recording medium storing a program would have the oversampling in the time direction performed by propagating, for each clock-time, an original parameter randomly set at the clock-time to times before and after clock-times while attenuating the original parameter. The motivation behind the modification would have been to obtain a non-transitory recording medium storing a program that improves machine learning model training, accuracy, and classification, as well as resolution, since both XIAOCHUN and NAKAMURA concern image analysis and Markov models. Wherein XIAOCHUN’s systems and methods provide a hierarchical hidden Markov model for capturing and understanding workers’ high-level activities in far-field surveillance videos that can be implemented for objective work sampling, personal physical fatigue, trade-level health risk assessment, and process-based quality control, while NAKAMURA’s systems and methods provide a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments and makes it possible to efficiently search for all possible segment lengths and classes in an unsupervised manner. Please see XIAOCHUN et al. 
(Xiaochun, Luo et al., “Capturing and Understanding Workers’ Activities in Far-Field Surveillance Videos with Deep Action Recognition and Bayesian Nonparametric Learning”, Computer-Aided Civil and Infrastructure Engineering, October 2018, pages 1-19, DOI:10.1111/mice.12419), Abstract and Pg. [16], Col. [01], Para. [01-02], and NAKAMURA et al. (NAKAMURA, Tomoaki et al., “Segmenting Continuous Motions with Hidden Semi-Markov Models and Gaussian Processes”, Front. Neurorobot., 20 December 2017, Volume 11 – 2017, https://doi.org/10.3389/fnbot.2017.00067), Abstract and Pg. [10], Col. [02], Para. [01-02]. Regarding claim 13, XIAOCHUN in view of NAKAMURA explicitly teach the non-transitory recording medium of claim 12. XIAOCHUN fails to explicitly teach wherein the original parameter is attenuated so as to become zero after a predetermined number of clock-times distant. However, NAKAMURA explicitly teaches wherein the original parameter is attenuated so as to become zero after a predetermined number of clock-times distant (Fig. 6. Pg. [06], Col. [01], Para. [01]-NAKAMURA discloses we regard segments and their classes as latent variables that are sampled by forward filtering-backward sampling (Algorithm 2). Figure 6 depicts the computation of a three-dimensional array [t][k][c]. The probability that two samples before time step t become a segment is computed; the resulting segment would be assigned to class two. Samples at t − 1 and t become a segment, and all the segments whose end point is t − 2 can potentially transit to this segment. [t][2][2] can be computed by marginalizing out these possibilities. At Pg. [06], Col. [02], Para. [01]-NAKAMURA discloses from t = T, length k1 and class c1 are determined according to k1, c1 ∼ [T][k][c], and sT−k1 : T becomes a segment whose class is c1. Then, length k2 and class c2 of the next segment are determined according to k2, c2 ∼ [T − k1][k][c]. 
By iterating this procedure until t = 0, the observed sequence can be divided into segments and their classes can be determined). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of XIAOCHUN in view of NAKAMURA of having a non-transitory recording medium storing a program that causes a computer to execute an action segment estimation model building processing, with the teachings of NAKAMURA of having the original parameter attenuated so as to become zero after a predetermined number of clock-times distant. Wherein XIAOCHUN’s non-transitory recording medium storing a program would have the original parameter attenuated so as to become zero after a predetermined number of clock-times distant. The motivation behind the modification would have been to obtain a non-transitory recording medium storing a program that improves machine learning model training, accuracy, and classification, as well as resolution, since both XIAOCHUN and NAKAMURA concern image analysis and Markov models. Wherein XIAOCHUN’s systems and methods provide a hierarchical hidden Markov model for capturing and understanding workers’ high-level activities in far-field surveillance videos that can be implemented for objective work sampling, personal physical fatigue, trade-level health risk assessment, and process-based quality control, while NAKAMURA’s systems and methods provide a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments and makes it possible to efficiently search for all possible segment lengths and classes in an unsupervised manner. Please see XIAOCHUN et al. 
(Xiaochun, Luo et al., “Capturing and Understanding Workers’ Activities in Far-Field Surveillance Videos with Deep Action Recognition and Bayesian Nonparametric Learning”, Computer-Aided Civil and Infrastructure Engineering, October 2018, pages 1-19, DOI:10.1111/mice.12419), Abstract and Pg. [16], Col. [01], Para. [01-02], and NAKAMURA et al. (NAKAMURA, Tomoaki et al., “Segmenting Continuous Motions with Hidden Semi-Markov Models and Gaussian Processes”, Front. Neurorobot., 20 December 2017, Volume 11 – 2017, https://doi.org/10.3389/fnbot.2017.00067), Abstract and Pg. [10], Col. [02], Para. [01-02]. Claims 5, 10 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over XIAOCHUN et al. (Xiaochun, Luo et al., “Capturing and Understanding Workers’ Activities in Far-Field Surveillance Videos with Deep Action Recognition and Bayesian Nonparametric Learning”, Computer-Aided Civil and Infrastructure Engineering, October 2018, pages 1-19, DOI:10.1111/mice.12419), hereinafter referenced as XIAOCHUN, in view of NAKAMURA et al. (NAKAMURA, Tomoaki et al., “Segmenting Continuous Motions with Hidden Semi-Markov Models and Gaussian Processes”, Front. Neurorobot., 20 December 2017, Volume 11 – 2017, https://doi.org/10.3389/fnbot.2017.00067), hereinafter referenced as NAKAMURA, and in further view of ASSOULINE et al. (US 20220125337 A1), hereinafter referenced as ASSOULINE. Regarding claim 5, XIAOCHUN in view of NAKAMURA explicitly teach the action segment estimation model building device of claim 1. XIAOCHUN in view of NAKAMURA fails to explicitly teach wherein the speed of each of the body locations is an angular speed of the corresponding body location, and a magnitude of noise related to the angular speed of each of the body locations increases as the angular speed increases. However, ASSOULINE explicitly teaches wherein the speed of each of the body locations is an angular speed of the corresponding body location (Fig. 5. Col. [15], Line [21-24]-ASSOULINE discloses FIG. 
5A is a block diagram showing an example body tracking system 126, according to example embodiments. Body tracking system 126 operates on a set of input data (e.g., a video 501 depicting a real-world body of a user). Body tracking system 126 includes a machine learning technique module 512, a skeletal joint position module 514, a smoothing module 516, and a virtual object display module 520 (wherein tracking system includes acceleration, gravitation and rotation sensors)), and a magnitude of noise related to the angular speed of each of the body locations increases as the angular speed increases (Fig. 5. Col. [16], Line [07-12]-ASSOULINE discloses the skeletal joint position module 514 tracks the joints that are part of one set separately from the joints that are part of another set to measure noise across the set of video frames. At Col. [16], Line [27-35]-ASSOULINE discloses the machine learning technique may be trained based on a set of training videos to predict that a first set of joints (e.g., the neck joint, left shoulder joint, and left elbow joint) move more or results in a greater amount of noise across a set of frames than another set of joints (e.g., the hip joint and the left leg joint). At Col. [17], Line [28-34]-ASSOULINE discloses the smoothing module 516 accesses a set of previous frames (e.g., 1-2 seconds of past video). The smoothing module 516 analyzes movement of the skeletal joints or sets of the skeletal joints across the set of previous frames. The smoothing module 516 applies a plurality of smoothing filters to the first set of skeletal joints that appear in the previous frames, such as after or before measuring a signal quality parameter representing an amount of noise in movement of the skeletal joints). 
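The claim 5 limitation mapped above (noise whose magnitude increases with each body location's angular speed) admits a minimal sketch. This is an illustration of the claim language only, not ASSOULINE's system; the joint-angle representation, the finite-difference angular speed, and the linear scaling are assumptions:

```python
import random

def angular_speed_noise(joint_angles, dt=1.0, scale=0.05, rng=None):
    """Hypothetical sketch: the angular speed of a body location is
    taken as the finite difference of its joint-angle sequence, and the
    magnitude of the noise added at each step grows with that angular
    speed."""
    rng = rng or random.Random(0)
    noisy = [joint_angles[0]]
    for prev, cur in zip(joint_angles, joint_angles[1:]):
        omega = abs(cur - prev) / dt          # angular speed at this step
        noisy.append(cur + rng.gauss(0.0, scale * omega))
    return noisy
```

Steps where the joint angle does not change receive zero noise under this sketch, while fast angular changes receive proportionally larger perturbations.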
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of XIAOCHUN in view of NAKAMURA of having an action segment estimation model building device, with the teachings of ASSOULINE of having the speed of each of the body locations be an angular speed of the corresponding body location, and a magnitude of noise related to the angular speed of each of the body locations that increases as the angular speed increases. Wherein XIAOCHUN’s action segment estimation model building device would have the speed of each of the body locations be an angular speed of the corresponding body location, and a magnitude of noise related to the angular speed of each of the body locations that increases as the angular speed increases. The motivation behind the modification would have been to obtain an action segment estimation model building device that improves the capture of human body movements, since both XIAOCHUN and ASSOULINE concern image analysis in the context of human action and movement. Wherein XIAOCHUN’s systems and methods provide a hierarchical hidden Markov model for capturing and understanding workers’ high-level activities in far-field surveillance videos that can be implemented for objective work sampling, personal physical fatigue, trade-level health risk assessment, and process-based quality control, while ASSOULINE’s systems and methods improve the efficiency of electronic devices and the overall responsiveness of filters for detecting and smoothing noise in skeletal movement. Please see XIAOCHUN et al. (Xiaochun, Luo et al., “Capturing and Understanding Workers’ Activities in Far-Field Surveillance Videos with Deep Action Recognition and Bayesian Nonparametric Learning”, Computer-Aided Civil and Infrastructure Engineering, October 2018, pages 1-19, DOI:10.1111/mice.12419), Abstract and Pg. [16], Col. [01], Para. [01-02], and ASSOULINE et al. 
(US 11660022 B2), Abstract. Regarding claim 10, XIAOCHUN in view of NAKAMURA explicitly teach the action segment estimation model building method of claim 6, XIAOCHUN in view of NAKAMURA fails to explicitly teach wherein the speed of each of the body location is an angular speed of the corresponding body location, and a magnitude of noise related to the angular speed of each of the body locations increases as the angular speed increases. However, ASSOULINE explicitly teaches wherein the speed of each of the body location is an angular speed of the corresponding body location (Fig. 5. Col. [15], Line [21-24]-ASSOULINE discloses FIG. 5A is a block diagram showing an example body tracking system 126, according to example embodiments. Body tracking system 126 operates on a set of input data (e.g., a video 501 depicting a real-world body of a user). Body tracking system 126 includes a machine learning technique module 512, a skeletal joint position module 514, a smoothing module 516, and a virtual object display module 520 (wherein tracking system includes acceleration, gravitation and rotation sensors)), and a magnitude of noise related to the angular speed of each of the body locations increases as the angular speed increases (Fig. 5. Col. [16], Line [07-12]-ASSOULINE discloses the skeletal joint position module 514 tracks the joints that are part of one set separately from the joints that are part of another set to measure noise across the set of video frames. At Col. [16], Line [27-35]-ASSOULINE discloses the machine learning technique may be trained based on a set of training videos to predict that a first set of joints (e.g., the neck joint, left shoulder joint, and left elbow joint) move more or results in a greater amount of noise across a set of frames than another set of joints (e.g., the hip joint and the left leg joint. At Col. [17], Line [28-34]-ASSOULINE discloses the smoothing module 516 accesses a set of previous frames (e.g., 1-2 seconds of past video). 
The smoothing module 516 analyzes movement of the skeletal joints or sets of the skeletal joints across the set of previous frames. The smoothing module 516 applies a plurality of smoothing filters to the first set of skeletal joints that appear in the previous frames, such as after or before measuring a signal quality parameter representing an amount of noise in movement of the skeletal joints). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of XIAOCHUN in view of NAKAMURA of having an action segment estimation model building method with the teachings of ASSOULINE of having wherein the speed of each of the body locations is an angular speed of the corresponding body location, and a magnitude of noise related to the angular speed of each of the body locations increases as the angular speed increases. The combination results in XIAOCHUN's action segment estimation model building method wherein the speed of each of the body locations is an angular speed of the corresponding body location, and a magnitude of noise related to the angular speed of each of the body locations increases as the angular speed increases. The motivation behind the modification would have been to obtain an action segment estimation model building method that improves the capture of human body movements, since both XIAOCHUN and ASSOULINE concern image analysis in the context of human action and movement. XIAOCHUN's systems and methods provide a hierarchical hidden Markov model for capturing and understanding workers' high-level activities in far-field surveillance videos that can be implemented for objective work sampling, personal physical fatigue, trade-level health risk assessment, and process-based quality control, while ASSOULINE's systems and methods improve the efficiency of electronic devices and the overall responsiveness of filters for detecting and smoothing noise in skeletal movement. Please see XIAOCHUN et al. (Xiaochun, Luo et al., "Capturing and Understanding Workers' Activities in Far-Field Surveillance Videos with Deep Action Recognition and Bayesian Nonparametric Learning", Computer-Aided Civil and Infrastructure Engineering, October 2018, pages 1-19, DOI:10.1111/mice.12419), Abstract and Pg. [16], Col. [01], Para. [01-02], and ASSOULINE et al. (US 11660022 B2), Abstract.

Regarding claim 15, XIAOCHUN in view of NAKAMURA explicitly teaches the non-transitory recording medium of claim 12 but fails to explicitly teach wherein the speed of each of the body locations is an angular speed of the corresponding body location, and a magnitude of noise related to the angular speed of each of the body locations increases as the angular speed increases. However, ASSOULINE explicitly teaches wherein the speed of each of the body locations is an angular speed of the corresponding body location (Fig. 5. Col. [15], Line [21-24]-ASSOULINE discloses FIG. 5A is a block diagram showing an example body tracking system 126, according to example embodiments. Body tracking system 126 operates on a set of input data (e.g., a video 501 depicting a real-world body of a user).
Body tracking system 126 includes a machine learning technique module 512, a skeletal joint position module 514, a smoothing module 516, and a virtual object display module 520 (wherein the tracking system includes acceleration, gravitation and rotation sensors)), and a magnitude of noise related to the angular speed of each of the body locations increases as the angular speed increases (Fig. 5. Col. [16], Line [07-12]-ASSOULINE discloses the skeletal joint position module 514 tracks the joints that are part of one set separately from the joints that are part of another set to measure noise across the set of video frames. At Col. [16], Line [27-35]-ASSOULINE discloses the machine learning technique may be trained based on a set of training videos to predict that a first set of joints (e.g., the neck joint, left shoulder joint, and left elbow joint) move more or result in a greater amount of noise across a set of frames than another set of joints (e.g., the hip joint and the left leg joint). At Col. [17], Line [28-34]-ASSOULINE discloses the smoothing module 516 accesses a set of previous frames (e.g., 1-2 seconds of past video). The smoothing module 516 analyzes movement of the skeletal joints or sets of the skeletal joints across the set of previous frames. The smoothing module 516 applies a plurality of smoothing filters to the first set of skeletal joints that appear in the previous frames, such as after or before measuring a signal quality parameter representing an amount of noise in movement of the skeletal joints).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of XIAOCHUN in view of NAKAMURA of having a non-transitory recording medium storing a program that causes a computer to execute an action segment estimation model building processing with the teachings of ASSOULINE of having wherein the speed of each of the body locations is an angular speed of the corresponding body location, and a magnitude of noise related to the angular speed of each of the body locations increases as the angular speed increases. The combination results in XIAOCHUN's non-transitory recording medium storing a program wherein the speed of each of the body locations is an angular speed of the corresponding body location, and a magnitude of noise related to the angular speed of each of the body locations increases as the angular speed increases. The motivation behind the modification would have been to obtain a non-transitory recording medium storing a program that improves the capture of human body movements, since both XIAOCHUN and ASSOULINE concern image analysis in the context of human action and movement. XIAOCHUN's systems and methods provide a hierarchical hidden Markov model for capturing and understanding workers' high-level activities in far-field surveillance videos that can be implemented for objective work sampling, personal physical fatigue, trade-level health risk assessment, and process-based quality control, while ASSOULINE's systems and methods improve the efficiency of electronic devices and the overall responsiveness of filters for detecting and smoothing noise in skeletal movement. Please see XIAOCHUN et al. (Xiaochun, Luo et al., "Capturing and Understanding Workers' Activities in Far-Field Surveillance Videos with Deep Action Recognition and Bayesian Nonparametric Learning", Computer-Aided Civil and Infrastructure Engineering, October 2018, pages 1-19, DOI:10.1111/mice.12419), Abstract and Pg. [16], Col. [01], Para. [01-02], and ASSOULINE et al. (US 11660022 B2), Abstract.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is listed below.

Fitzgibbon et al. (US 20110228976 A1) - Synthesized body images are generated for a machine learning algorithm of a body joint tracking system. Frames from motion capture sequences are retargeted to several different body types, to leverage the motion capture sequences. To avoid providing redundant or similar frames to the machine learning algorithm, and to provide a compact yet highly variegated set of images, dissimilar frames can be identified using a similarity metric. The similarity metric is used to locate frames which are sufficiently distinct, according to a threshold distance. For realism, noise is added to the depth images based on noise sources which a real world depth camera would often experience. Other random variations can be introduced as well. For example, a degree of randomness can be added to retargeting. For each frame, the depth image and a corresponding classification image, with labeled body parts, are provided. 3-D scene elements can also be provided. Please see Fig. 4-7. Abstract.

LEA et al. (C. Lea, M. D. Flynn, R. Vidal, A. Reiter and G. D. Hager, "Temporal Convolutional Networks for Action Segmentation and Detection," 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 2017, pp. 1003-1012, doi: 10.1109/CVPR.2017.113) - The ability to identify and temporally segment fine-grained human actions throughout a video is crucial for robotics, surveillance, education, and beyond.
Typical approaches decouple this problem by first extracting local spatiotemporal features from video frames and then feeding them into a temporal classifier that captures high-level temporal patterns. We describe a class of temporal models, which we call Temporal Convolutional Networks (TCNs), that use a hierarchy of temporal convolutions to perform fine-grained action segmentation or detection. Our Encoder-Decoder TCN uses pooling and upsampling to efficiently capture long-range temporal patterns whereas our Dilated TCN uses dilated convolutions. We show that TCNs are capable of capturing action compositions, segment durations, and long-range dependencies, and are over a magnitude faster to train than competing LSTM-based Recurrent Neural Networks. We apply these models to three challenging fine-grained datasets and show large improvements over the state of the art. Please see Fig. 1. Abstract.

WANG et al. (US 20130132316 A1) - Embodiments of the present invention include systems and methods for improved state space modeling (SSM) comprising two added layers to model the substructure transition dynamics and action duration distribution. In embodiments, the first layer represents a substructure transition model that encodes the sparse and global temporal transition probability. In embodiments, the second layer models the action boundary characteristics by injecting discriminative information into a logistic duration model such that transition boundaries between successive actions can be located more accurately; thus, the second layer exploits discriminative information to discover action boundaries adaptively. Please see Fig. 5-7. Abstract.

ETEMAD et al. (US 20150272483 A1) - Systems, methods and devices that facilitate determination of enhanced exercise or physical activity metrics by considering multiple types of data.
Metrics are computable by pre-processing, and in some cases segmenting, a variety of input signals, such as acceleration signals, electromyography signals or other signals from a wearable device. Please see Fig. 2, 8-10. Abstract.

Bajcsy et al. (US 20100176952 A1) - An approach for determining motions of a body using distributed sensors is disclosed. In one embodiment, an apparatus can include: a plurality of sensors coupled to a body, where each sensor is positioned at about a designated location on the body, and where each sensor is configured to acquire motion data related to movement of the designated location on the body and at which the sensor is positioned, and to reduce the motion data into compressed and transmittable motion data; and a base station configured to receive the compressed motion data via wireless communication from at least one of the plurality of sensors, the base station being further configured to remove outlier information from the received motion data, and to match the received motion data to a predetermined action, where the predetermined action indicates a movement of the body. Please see Fig. 1-4. Abstract.

DATTA et al. (US 20210110550 A1) - Systems and methods are disclosed to objectively identify sub-second behavioral modules in the three-dimensional (3D) video data that represents the motion of a subject. Defining behavioral modules based upon structure in the 3D video data itself—rather than using a priori definitions for what should constitute a measurable unit of action—identifies a previously-unexplored sub-second regularity that defines a timescale upon which behavior is organized, yields important information about the components and structure of behavior, offers insight into the nature of behavioral change in the subject, and enables objective discovery of subtle alterations in patterned action.
The systems and methods of the invention can be applied to drug or gene therapy classification, drug or gene therapy screening, disease study including early detection of the onset of a disease, toxicology research, side-effect study, learning and memory process study, anxiety study, and analysis in consumer behavior. Please see Fig. 2, 4, and 8. Abstract.

BUNEO (Y. SHI, C.A. BUNEO, "Movement variability resulting from different noise sources: A simulation study", Human Movement Science 31 (2012), https://doi.org/10.1016/j.humov.2011.07.003) - Limb movements are highly variable due in part to noise occurring at different stages of movement production, from sensing the position of the limb to the issuing of motor commands. Here we used a simulation approach to predict the effects of noise associated with (1) sensing the position of the limb ('position sensing noise') and (2) planning an appropriate movement vector ('trajectory planning noise'), as well as the combined effects of these factors, on arm movement variability across the workspace. Results were compared to those predicted by a previous model of the noise associated with movement execution. We found that the effects of sensing and planning related noise on movement variability were highly dependent upon both the planned movement direction and the initial configuration of the arm and differed in several respects from the effects of execution noise. In addition, sensing and planning noise interacted in a complex manner across movement directions. These results provide important insights into the relative roles of sensing, planning and execution noise in movement variability that could prove useful for understanding and addressing the exaggerated variability that arises from neurological damage, and for interpreting neurophysiological investigations that seek to relate neural variability to behavioral variability. Please see Fig. 1-3. Abstract.

ROLLE et al.
(US 20180181884 A1) - Signal Phase and Timing (SPaT) messages are provided to control operation of a vehicle. A computer system receives switching state data (SD1) from one or more traffic lights and provides a SPaT message to the vehicle. The SD1 of a traffic light includes pass-state (SD1p) and stop-state (SD1s) data at respective sampling time points. A signal analyzer in the computer system analyzes the SD1 by: identifying the current signal state (SD1s, SD1p) of the one or more traffic lights; deriving, from a statistical model, probabilities for future state transitions for one or more future prediction intervals; and determining a minimum end time for a state transition from a current state to the different state. A message composer composes the SPaT message including the determined minimum end time. Please see Fig. 1-2 and 4-5. And para. [0018-0024, 0060, 0065, 0068], Abstract.

Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aaron Bonansinga, whose telephone number is (703) 756-5380. The examiner can normally be reached on Monday-Friday, 9:00 a.m. - 6:00 p.m. ET.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chineyere Wills-Burns, can be reached by phone at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AARON TIMOTHY BONANSINGA/
Examiner, Art Unit 2673

/CHINEYERE WILLS-BURNS/
Supervisory Patent Examiner, Art Unit 2673

Prosecution Timeline

Jun 26, 2023
Application Filed
Oct 16, 2025
Non-Final Rejection — §103, §112
Dec 16, 2025
Response Filed
Feb 21, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12555249
METHOD, SYSTEM, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM FOR SUPPORTING VIRTUAL GOLF SIMULATION
2y 5m to grant Granted Feb 17, 2026
Patent 12548171
INFORMATION PROCESSING APPARATUS, METHOD AND MEDIUM
2y 5m to grant Granted Feb 10, 2026
Patent 12541822
METHOD AND APPARATUS OF PROCESSING IMAGE, COMPUTING DEVICE, AND MEDIUM
2y 5m to grant Granted Feb 03, 2026
Patent 12505503
IMAGE ENHANCEMENT
2y 5m to grant Granted Dec 23, 2025
Patent 12482106
METHOD AND ELECTRONIC DEVICE FOR SEGMENTING OBJECTS IN SCENE
2y 5m to grant Granted Nov 25, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+33.3%)
2y 11m
Median Time to Grant
Moderate
PTA Risk
Based on 25 resolved cases by this examiner. Grant probability derived from career allow rate.
