Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-17 are presented for examination.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on June 14, 2023 and July 27, 2023 have been received. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Objections
Claim 15 is objected to because of the following informalities:
A superfluous space exists between “model” and the comma in the second limitation.
The phrase “a predicting unit, configured to input the path representation into a prediction model” is semantically unclear because it associates a structural element (“unit”) with actions (“operations”). Applicant may rephrase the limitation as “providing a predicting unit configured to…”, or remove the term “predicting unit” and recite “inputting the path representation” instead, to obviate the interpretation of the claim under 35 U.S.C. 112(f).
Claim 16 is objected to by virtue of its dependency on claim 15.
Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“a predicting unit” in claim 15.
Regarding the invocation of 35 U.S.C. 112(f), see the rejections under 35 U.S.C. 112(a) and 112(b) infra.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 15-16 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
The claim limitation “predicting unit” invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The scope of this functional limitation, therefore, encompasses any and all software that performs the claimed functions, without limitation to a specific disclosed structure or algorithm.
Therefore, it is unclear whether Applicant had possession of the claimed invention as of the effective filing date. See the rejection under 35 U.S.C. 112(b) infra for further analysis.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 15-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim limitation “predicting unit” invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. At most, the function of the predicting unit is described in paragraph 72, which describes that the unit is “configured to” perform the task without providing any algorithm (e.g., mathematical formula, sequence of steps) for how the prediction model processes the input to generate the output. The scope of this functional limitation, therefore, encompasses any and all software that performs the claimed functions, without limitation to a specific disclosed structure or algorithm. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 1
Step 1: The claim recites a method; therefore, it is directed to the statutory category of processes.
Step 2A Prong 1: The claim recites, inter alia:
[A]cquiring at least one trajectory point of at least one user, wherein each trajectory point of each user comprises a place passed by the each user, a start time and a duration: This limitation encompasses a mental process of obtaining trajectory data points.
[O]btaining, for each user, a position of each trajectory point from the trajectory representation of the user by searching according to the start time and the duration of each trajectory point of the each user: This limitation encompasses a mental process of obtaining trajectory data point representations of the position of the user, which can be mentally performed.
[A]djusting a network parameter… according to a difference between the place passed by the each user and the position of the each trajectory point obtained by searching: This limitation is seen as a mental process since it deals with adjusting a parameter and not changing the model itself.
Step 2A Prong 2: This judicial exception is not integrated into a practical application because the additional elements are as follows:
[I]nputting the at least one trajectory point of the at least one user into a pre-trained model to obtain a trajectory representation of each user: Mere data gathering recited at a high level of generality, and thus insignificant extra-solution activity (MPEP 2106.05(g)).
[T]raining a path representation model, comprising…: Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).
…of the pre-trained model… to obtain the path representation model: Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows:
[I]nputting the at least one trajectory point of the at least one user into a pre-trained model to obtain a trajectory representation of each user: The additional element of “receiving” does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, the receiving step amounts to no more than mere data gathering. This element amounts to receiving data over a network and is well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II(i). This cannot provide an inventive concept.
[T]raining a path representation model, comprising…: Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea and cannot provide an inventive concept (MPEP 2106.05(f)).
…of the pre-trained model… to obtain the path representation model: Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea and cannot provide an inventive concept (MPEP 2106.05(f)).
The elements in combination as an ordered whole still do not amount to significantly more than the judicial exception (i.e., the abstract ideas of obtaining trajectory data and mathematical concepts for calculating parameter differences). The claim merely describes a process of applying known data processing techniques (inputting trajectory points and searching by the start time and duration) and standard computing functions (training a model by adjusting parameters based on the calculated difference). Therefore, the claim as a whole remains focused on the abstract idea and fails Step 2B of the eligibility analysis.
Claim 2
Step 1: A process, as above.
Step 2A Prong 1: The claim recites, inter alia:
[A]cquiring a sample set, a sample in the sample set comprising a sample trajectory and a tag: This limitation is viewed as a mental process of acquiring data samples containing a trajectory and a tag.
Step 2A Prong 2: This judicial exception is not integrated into a practical application because the additional elements are as follows:
[A]nd using respectively the sample trajectory and the tag in the sample set as an input and an expected output of the path representation model…: Mere data gathering recited at a high level of generality, and thus insignificant extra-solution activity (MPEP 2106.05(g)).
to perform supervised training on the path representation model: Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows:
[A]nd using respectively the sample trajectory and the tag in the sample set as an input and an expected output of the path representation model…: The additional element of “receiving” does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, the receiving step amounts to no more than mere data gathering. This element amounts to receiving data over a network and is well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II(i). This cannot provide an inventive concept.
to perform supervised training on the path representation model: Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea and cannot provide an inventive concept (MPEP 2106.05(f)).
Even when considered in combination, these additional elements represent mere instructions to apply an exception and therefore do not provide an inventive concept. The claim is ineligible.
Claim 3
Step 1: A process, as above.
Step 2A Prong 1: The claim recites, inter alia:
[D]ividing, for a target sample trajectory with a total duration exceeding a predetermined value in the sample set, the target sample trajectory into at least one segment according to a predetermined time interval: This limitation is viewed as a mental process as it involves partitioning a dataset based on determining whether certain values exceed different thresholds.
[A]nd constructing, for each target sample trajectory, the representation of each segment into a sequence of representations of the target sample trajectory: This limitation is seen as a mental process as it involves organizing the representation of each segment into a sequence of representations.
Step 2A Prong 2: This judicial exception is not integrated into a practical application because the additional elements are as follows:
[I]nputting, for each target sample trajectory, at least one segment of the target sample trajectory into…: Mere data gathering recited at a high level of generality, and thus insignificant extra-solution activity (MPEP 2106.05(g)).
…the path representation model to obtain a representation of each segment of the target sample trajectory: Insignificant extra-solution activity, as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)).
…and inputting the sequence and a time identifier corresponding to each segment into a sequence model…: Mere data gathering recited at a high level of generality, and thus insignificant extra-solution activity (MPEP 2106.05(g)).
…to output a sequence representation of the target sample trajectory: Insignificant extra-solution activity, as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)).
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows:
[I]nputting, for each target sample trajectory, at least one segment of the target sample trajectory into…: The additional element of “receiving” does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, the receiving step amounts to no more than mere data gathering. This element amounts to receiving data over a network and is well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II(i). This cannot provide an inventive concept.
…the path representation model to obtain a representation of each segment of the target sample trajectory: Insignificant extra-solution activity, as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)). This falls under well-understood, routine, conventional activity; see MPEP 2106.05(d), subsection II(vi).
…and inputting the sequence and a time identifier corresponding to each segment into a sequence model…: The additional element of “receiving” does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, the receiving step amounts to no more than mere data gathering. This element amounts to receiving data over a network and is well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II(i). This cannot provide an inventive concept.
…to output a sequence representation of the target sample trajectory: Insignificant extra-solution activity, as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)). This falls under well-understood, routine, conventional activity; see MPEP 2106.05(d), subsection II(vi).
Even when considered in combination, these additional elements represent mere instructions to apply an exception and therefore do not provide an inventive concept. The claim is ineligible.
Claim 4
Step 1: A process, as above.
Step 2A Prong 1: The claim recites, inter alia:
and adjusting a network parameter…: This limitation is seen as a mental process since it deals with adjusting a parameter and not changing the model itself.
…of the sequence model according to a difference between the prediction result of the each target sample trajectory and a tag corresponding to the each target sample trajectory: This limitation is viewed as a mathematical concept, as it deals with calculating a loss value, i.e., the difference between two numbers.
Step 2A Prong 2: This judicial exception is not integrated into a practical application because the additional elements are as follows:
[O]utputting the sequence representation of each target sample trajectory by a prediction model, to obtain a prediction result of the each target sample trajectory: Insignificant extra-solution activity, as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)).
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows:
[O]utputting the sequence representation of each target sample trajectory by a prediction model, to obtain a prediction result of the each target sample trajectory: Insignificant extra-solution activity, as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)). This falls under well-understood, routine, conventional activity; see MPEP 2106.05(d), subsection II(vi).
Even when considered in combination, these additional elements represent mere instructions to apply an exception and therefore do not provide an inventive concept. The claim is ineligible.
Claim 5
Step 1: A process, as above.
Step 2A Prong 1: The claim recites, inter alia:
the tag comprises at least one of: a path category tag, an abnormal event tag, a next position tag, or a schedule tag: This limitation is viewed as a mental process because it involves the mental classification of data into specific categories (e.g., path, event, schedule).
Step 2A Prong Two and Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under step 2B. Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d) I.), failing step 2A prong 2. The claim is ineligible.
Claim 6
Step 1: A process, as above.
Step 2A Prong 1: The claim recites, inter alia:
[M]asking, according to a masking rule, places passed by the user in a part of the at least one trajectory point of the at least one user, to obtain at least one masked trajectory point: This limitation is viewed as a mental process of masking data and obtaining that masked data.
and adjusting a network parameter…: This limitation is seen as a mental process since it deals with adjusting a parameter and not changing the model itself.
…of the pre-trained model according to a difference between the mask position and the masking rule…: This limitation is viewed as a mathematical concept as it deals with calculating the difference between two numbers.
Step 2A Prong 2: This judicial exception is not integrated into a practical application because the additional elements are as follows:
[I]nputting the at least one masked trajectory point into the…: Mere data gathering recited at a high level of generality, and thus insignificant extra-solution activity (MPEP 2106.05(g)).
…pre-trained model to obtain a mask position: Insignificant extra-solution activity, as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)).
…to obtain the path representation model: Insignificant extra-solution activity, as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)).
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows:
[I]nputting the at least one masked trajectory point into the…: The additional element of “receiving” does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, the receiving step amounts to no more than mere data gathering. This element amounts to receiving data over a network and is well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II(i). This cannot provide an inventive concept.
…pre-trained model to obtain a mask position: Insignificant extra-solution activity, as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)). This falls under well-understood, routine, conventional activity; see MPEP 2106.05(d), subsection II(vi).
…to obtain the path representation model: Insignificant extra-solution activity, as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)). This falls under well-understood, routine, conventional activity; see MPEP 2106.05(d), subsection II(vi).
Even when considered in combination, these additional elements represent mere instructions to apply an exception and therefore do not provide an inventive concept. The claim is ineligible.
Claim 7
Step 1: A process, as above.
Step 2A Prong 1: The claim recites, inter alia:
[A]cquiring to-be-analyzed user trajectory information: This limitation encompasses a mental process dealing with acquiring trajectory data.
Step 2A Prong 2: This judicial exception is not integrated into a practical application because the additional elements are as follows:
[I]nputting the user trajectory information into the… and inputting the path representation into a…: Mere data gathering recited at a high level of generality, and thus insignificant extra-solution activity (MPEP 2106.05(g)).
…path representation model to output a path representation… prediction model to output a prediction result: Insignificant extra-solution activity, as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)).
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows:
[I]nputting the user trajectory information into the… and inputting the path representation into a…: The additional element of “receiving” does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, the receiving step amounts to no more than mere data gathering. This element amounts to receiving data over a network and is well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II(i). This cannot provide an inventive concept.
…path representation model to output a path representation… prediction model to output a prediction result: Insignificant extra-solution activity, as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)). This falls under well-understood, routine, conventional activity; see MPEP 2106.05(d), subsection II(vi).
Even when considered in combination, these additional elements represent mere instructions to apply an exception and therefore do not provide an inventive concept. The claim is ineligible.
Claim 8
Step 1: A process, as above.
Step 2A Prong 1: The claim recites, inter alia:
the prediction result comprises at least one of: a path category, an abnormal event, a next position, or a schedule: This limitation is viewed as a mental process because it involves the mental classification of the result into specific categories (e.g., path, abnormal event, next position, schedule).
Step 2A Prong Two and Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under step 2B. Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d) I.), failing step 2A prong 2. The claim is ineligible.
Claim 9
Step 1: The claim recites an apparatus; therefore, it is directed to the statutory category of machines.
Step 2A Prong 1: The claim recites, inter alia:
[A]cquiring at least one trajectory point of at least one user, wherein each trajectory point of each user comprises a place passed by the each user, a start time and a duration: This limitation encompasses a mental process of obtaining trajectory data points.
[O]btaining, for each user, a position of each trajectory point from the trajectory representation of the user by searching according to the start time and the duration of each trajectory point of the each user: This limitation encompasses a mental process of obtaining trajectory data point representations of the position of the user, which can be mentally performed.
[A]djusting a network parameter… according to a difference between the place passed by the each user and the position of the each trajectory point obtained by searching: This limitation is seen as a mental process since it deals with adjusting a parameter and not changing the model itself.
Step 2A Prong 2: This judicial exception is not integrated into a practical application because the additional elements are as follows:
[T]raining a path representation model, comprising: at least one processor; and a storage device, wherein the storage device stores instructions executable by the at least one processor, and the instructions when executed by the at least one processor cause the at least one processor to perform operations comprising: This amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).
[I]nputting the at least one trajectory point of the at least one user into a pre-trained model to obtain a trajectory representation of each user: Data gathering: mere data gathering recited at a high level of generality, and thus insignificant extra-solution activity (MPEP 2106.05(g)).
…of the pre-trained model… to obtain the path representation model: This amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows:
[T]raining a path representation model, comprising: at least one processor; and a storage device, wherein the storage device stores instructions executable by the at least one processor, and the instructions when executed by the at least one processor cause the at least one processor to perform operations comprising: This amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, and cannot provide an inventive concept (MPEP 2106.05(f)).
[I]nputting the at least one trajectory point of the at least one user into a pre-trained model to obtain a trajectory representation of each user: The additional element of “receiving” does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, this receiving step amounts to no more than mere data gathering. This element amounts to receiving data over a network and is well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II(i). This cannot provide an inventive concept.
…of the pre-trained model… to obtain the path representation model: This amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, and cannot provide an inventive concept (MPEP 2106.05(f)).
The elements in combination as an ordered whole still do not amount to significantly more than the judicial exception (i.e., the abstract ideas of obtaining trajectory data and mathematical concepts for calculating parameter differences). The claim merely describes a process of applying known data processing techniques (inputting trajectory points and searching by the start time and duration) and standard computing functions (training a model by adjusting parameters) using a generic computer.
Therefore, the claim as a whole remains focused on the abstract idea and fails Step 2B of the eligibility analysis.
Claims 10-16
Step 1: Claims 10-16 recite an apparatus; therefore, they are directed to the statutory category of apparatus.
Step 2A Prong 1: Claims 10-16 recite judicial exceptions similar to those in claims 2-8.
Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The analysis at this step mirrors that of claims 2-8, respectively, except insofar as claim 15 additionally recites:
a predicting unit: This amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The analysis at this step mirrors that of claims 2-8, respectively, except insofar as claim 15 additionally recites:
a predicting unit: This amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, and cannot provide an inventive concept (MPEP 2106.05(f)).
Even when considered in combination, these additional elements represent mere instructions to apply
an exception and therefore do not provide an inventive concept. The claim is ineligible.
Claim 17
Step 1: The claim recites a non-transitory computer-readable medium; therefore, it is directed to the statutory category of manufacture.
Step 2A Prong 1: The claim recites, inter alia:
[A]cquiring at least one trajectory point of at least one user, wherein each trajectory point of each user comprises a place passed by the each user, a start time and a duration: This limitation encompasses a mental process of obtaining trajectory data points.
[O]btaining, for each user, a position of each trajectory point from the trajectory representation of the user by searching according to the start time and the duration of each trajectory point of the each user: This limitation encompasses a mental process of obtaining trajectory data point representations of the position of the user, which can be mentally performed.
[A]djusting a network parameter… according to a difference between the place passed by the each user and the position of the each trajectory point obtained by searching: This limitation is seen as a mental process because it merely involves adjusting a parameter rather than changing the model itself.
Step 2A Prong 2: This judicial exception is not integrated into a practical application because the additional elements are as follows:
[S]toring a computer instruction, wherein the computer instruction when executed by a computer causes the computer to perform operations comprising: This amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).
[I]nputting the at least one trajectory point of the at least one user into a pre-trained model to obtain a trajectory representation of each user: Data gathering: mere data gathering recited at a high level of generality, and thus insignificant extra-solution activity (MPEP 2106.05(g)).
…of the pre-trained model… to obtain the path representation model: This amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows:
[S]toring a computer instruction, wherein the computer instruction when executed by a computer causes the computer to perform operations comprising: This amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, and cannot provide an inventive concept (MPEP 2106.05(f)).
[I]nputting the at least one trajectory point of the at least one user into a pre-trained model to obtain a trajectory representation of each user: The additional element of “receiving” does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, this receiving step amounts to no more than mere data gathering. This element amounts to receiving data over a network and is well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II(i). This cannot provide an inventive concept.
…of the pre-trained model… to obtain the path representation model: This amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, and cannot provide an inventive concept (MPEP 2106.05(f)).
The elements in combination as an ordered whole still do not amount to significantly more than the judicial exception (i.e., the abstract ideas of obtaining trajectory data and mathematical concepts for calculating parameter differences). The claim merely describes a process of applying known data processing techniques (inputting trajectory points and searching by the start time and duration) and standard computing functions (training a model by adjusting parameters) using a generic computer.
Therefore, the claim as a whole remains focused on the abstract idea and fails Step 2B of the eligibility analysis.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 6-9, and 14-17 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou (“Contrastive Trajectory Learning for Tour Recommendation”, 2021) in view of Huberman (US 12509108 B2).
Regarding claim 1,
Zhou teaches [a] method for training a path representation model, comprising: acquiring at least one trajectory point of at least one user (Page 4 of Introduction, “To address the above issues, we propose a novel tour recommendation method Contrastive Trajectory Learning for Tour Recommendation (CTLTR)… CTLTR is a deep neural network based recommendation model, leveraging RNN as the basic building block but introducing complementary training objectives to improve the model performance… We propose a new auxiliary training objective to enhance the recommendation accuracy and enrich trip representations…”, Page 6 Paragraph 3, “Due to the impressive performance achieved by deep neural networks (DNN) in a broad range of tasks, a few recent works attempted to capture complex trajectory data and human mobility with various DNNs. DeepTrip [17] models historical check-in sequence with RNN and adopts an auxiliary network to learn the latent representations of tourists’ trajectories”, Definition 3.1 on Page 8, “(Tour Recommendation). INPUT: A user-provided query consisting of the desired start point ls and start time ts , the length of the trip N (i.e., the number of POIs to visit), and the end point le at time te . OUTPUT: The tour recommender system returns a tour route T =(l1 = ls ,l2,l3,...,lN = le )”
Contrastive Trajectory Learning for Tour Recommendation (CTLTR) is a tour representation model that utilizes geolocated social network data to define user/tourist trajectories as sequences of visited Points of Interest (POIs) associated with spatial-temporal metadata. The path representation corresponds to the trajectory embedding representation learned by the fine-tuned model from which the output route is produced.),
wherein each trajectory point of each user comprises a place passed by the each user, a start time and a duration (Page 6 Paragraph 3, “DeepTrip [17] models historical check-in sequence with RNN and adopts an auxiliary network to learn the latent representations of tourists’ trajectories…”, Definition 3.1 on Page 8, “(Tour Recommendation). INPUT: A user-provided query consisting of the desired start point ls and start time ts , the length of the trip N (i.e., the number of POIs to visit), and the end point le at time te.”, Page 11 Section 4.1.2 Spatial and Temporal Contexts of POIs, “Following Reference [17], we encode the spatialtemporal context of each location in a trajectory by incorporating the geographical and temporal constraints imposed by the start point and end points. That is, the current-time geographical distance u(li,τ ) of a particular POI visiting li,τ (i.e., a tourist visits li at time τ ) is calculated by the following:
[media_image1.png: Equation (8) of Zhou]
where d(·, ·) denotes the Euclidean distance between two locations…The rationale behind Equation (8) is to account for the relative distance constraints imposed by the start and end POIs… We note that other contexts and constraints, e.g., duration time and queuing time, can be encoded in a similar way.”
Each trajectory point inputted by the user is used to encode that trajectory point. Equation (8) calculates the Euclidean distance between l_(i,τ), which represents the tourist visiting a place/location at time τ, and l_s, t_s, which represents the tourist visiting the start point at the start time. The trajectory points that every user inputs into the CTLTR model always contain a duration, since Equation (8) calculates the distance from the start point to the location the tourist is visiting, incorporating the start time and the time the tourist arrived at that location.);
inputting the at least one trajectory point of the at least one user into a pre-trained model to obtain a trajectory representation of each user (Page 10 Under Section 4.1, “The Base model serves as a basic supervised framework to encode the trajectories into latent representations containing semantic relationships and sequential visiting patterns between POIs.”, Page 6 Paragraph 3, “DeepTrip [17] models historical check-in sequence with RNN and adopts an auxiliary network to learn the latent representations of tourists’ trajectories”, Page 12 Section 4.2, “Once the CTLTR model is pre-trained, it can be used as fine-tuning on the tour recommendation problem.”
The CTLTR framework utilizes a hierarchical RNN, referred to as the Base model, which ingests sequences of trajectory points from tourists/users to produce low-dimensional representations that capture user-specific mobility patterns for personalized recommendation.);
and adjusting a network parameter of the pre-trained model according to a difference between the place passed by the each user and the position of the each trajectory point obtained by searching, to obtain the path representation model (Page 14 Section 4.3.3, “We use the cross-entropy loss function to optimize the model. Specifically, for a certain trajectory T , the loss is calculated by the following:
[media_image2.png: cross-entropy loss equation of Zhou]
where N is the length of the trajectory, li is the ith ground truth POI, and ˆ li is the predicted POI.”, Page 4 Introduction, “CTLTR is a deep neural network based recommendation model, leveraging RNN as the basic building block but introducing complementary training objectives to improve the model performance. Specifically, CTLTR splits each trajectory into multiple subsequences in a recursive manner, which preserves the coherent motion patterns and significantly augments the trajectory training data”, Page 8 Section 3.1, “OUTPUT: The tour recommender system returns a tour route T =(l1 = ls ,l2,l3,...,lN = le ).”
Zhou fine-tunes a pre-trained neural network by adjusting network parameters using a loss function that explicitly measures the difference between the ground-truth places actually passed by the user and the predicted positions (POIs) along the trajectory. By minimizing the cross-entropy loss over trajectory subsequences, the model updates its parameters based on the ground truth and the predicted result, thereby producing a path representation model. The path representation corresponds to the trajectory embedding representation learned by the fine-tuned model from which the output route is produced.).
Zhou does not teach obtaining, for each user, a position of each trajectory point from the trajectory representation of the user by searching according to the start time and the duration of each trajectory point of the each user; and adjusting a network parameter of the pre-trained model according to a difference between the place passed by the each user and the position of the each trajectory point obtained by searching, to obtain the path representation model.
Huberman, in the same field of endeavor, teaches obtaining, for each user, a position of each trajectory point from the trajectory representation of the user by searching according to the start time and the duration of each trajectory point of the each user (Paragraph 230, “ Each vehicle 200 may collect data relating to a path that the vehicle took along the road segment. The path traveled by a particular vehicle may be determined based on camera data, accelerometer information, speed sensor information, and/or GPS information, among other potential sources.”, Paragraph 248, “The geometry of a reconstructed trajectory (and also a target trajectory) along a road segment may be represented by a curve in three dimensional space, which may be a spline connecting three dimensional polynomials. The reconstructed trajectory curve may be determined from analysis of a video stream or a plurality of images captured by a camera installed on the vehicle. In some embodiments, a location is identified in each frame or image that is a few meters ahead of the current position of the vehicle. This location is where the vehicle is expected to travel to in a predetermined time period. This operation may be repeated frame by frame, and at the same time, the vehicle may compute the camera's ego motion (rotation and translation). At each frame or image, a short range model for the desired path is generated by the vehicle in a reference frame that is attached to the camera.”
This reference describes reconstructing and representing the trajectory of a user’s vehicle as a time-based sequence of trajectory points, where each point corresponds to a predicted position at a particular time interval derived from sensor data. By determining the vehicle’s position frame by frame over predetermined time periods, Huberman teaches obtaining the position of each trajectory point by searching the trajectory representation according to the start time and duration of each point.);
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Zhou’s CTLTR framework for learning trajectory embeddings with Huberman’s method for obtaining positions of trajectory points based on start times, durations, and sensor data, in order to generate trajectory representations used for more accurate and context-aware path predictions (Paragraph 2 of Huberman).
Regarding claim 6,
Zhou teaches masking, according to a masking rule, places passed by the user in a part of the at least one trajectory point of the at least one user, to obtain at least one masked trajectory point (Page 14 Section 4.3.2, “Therefore, we propose to model the segment-trajectory correlation in a similar way, i.e., define the pretext task as a subsequence Cloze problem. Consider a sequence of POIs {lj,tj ,...,lj+n,tj+n } with length n +1 ∈ [1, N −2]. We mask the subsequence [mask1,mask2,...] in the original trajectory T. Then, we predict the masked segment based on the surrounding context T s = {l1,t1 ,..., [mask1,mask2,...],...,lN,tN }. The model is also optimized by a similarity loss function based on mutual information maximization”, Page 5 Section 2.1, “Tour recommendation (TR) refers to customizing trajectory plans for users, generally including starting location, destination(s), and the number of places to visit—typically accompanied by other constraints such as specified times, budget, transportation means, and so on.”
Zhou implements a subsequence Cloze problem where specific POIs (places passed by the user) are hidden within a trajectory sequence to create “masked trajectory points.” The model then inputs this masked context to predict the hidden segments, using a similarity loss function to calculate the difference between the prediction and the original data. This process adjusts the network parameters via mutual information maximization, effectively training the system to become the functional path representation model.);
inputting the at least one masked trajectory point into the pre-trained model to obtain a mask position (Page 11 and 12 Section 4.1.3, “The former aims to model the existing trajectories and generate the corresponding hidden state representation by utilizing recurrent neural networks. In CTLTR, we select the long short-term memory (LSTM) [28] as the basic recurrent unit to model the temporal dependencies among POI trajectories. Similarly, a POI recommender based on LSTM is used to generate the next recommended POI given the background knowledge and all past POIs as input… hPR t are the LSTM hidden state vectors of the query constructor and the POI recommender”
Zhou’s model utilizes an LSTM-based Query Constructor to process trajectory data and generate hidden state vectors that represent the specific temporal dependencies of the sequence. These hidden states allow the POI Recommender to target the missing information at that exact point in the sequence.);
and adjusting a network parameter of the pre-trained model according to a difference between the mask position and the masking rule, to obtain the path representation model (Page 12 Section 4.2, “We used self-supervised signals to minimize the pre-training losses via mutual information maximization (MIM).”, Page 8 Section 3.1, “Mutual information (MI) is a Shannon entropy-based measurement of random variable dependencies [2], i.e., given two variables X and Y…
[media_image3.png: mutual information definition of Zhou]
… Consider a classification problem that aims to predict the label y by giving an input variable… According to the Fano’s inequality [54]:
[media_image4.png: Fano’s inequality equation of Zhou]
… where y is the true label, yˆ is the predicted label”, Page 4 Introduction, “CTLTR is a deep neural network based recommendation model, leveraging RNN as the basic building block but introducing complementary training objectives to improve the model performance. Specifically, CTLTR splits each trajectory into multiple subsequences in a recursive manner, which preserves the coherent motion patterns and significantly augments the trajectory training data”, Page 8 Section 3.1, “OUTPUT: The tour recommender system returns a tour route T =(l1 = ls ,l2,l3,...,lN = le ).”
Zhou uses Mutual Information Maximization (MIM) and Shannon entropy-based measurements to calculate the “difference” (loss) between the predicted masked segment and the true label. By minimizing this pre-training loss through self-supervised signals, the system adjusts its network parameters to align the model’s output with the ground truth, transforming the RNN into a more refined path representation model. Essentially, the “masking rule” represents the ground truth that the model is trying to recover, while the “mask position” is the prediction made by the model. By using MIM, Zhou calculates the statistical difference between the prediction and ground truth, and then uses this value to update the model’s weights.).
Regarding claim 7,
Zhou teaches acquiring to-be-analyzed user trajectory information; inputting the user trajectory information into the path representation model, to output a path representation; and inputting the path representation into a prediction model to output a prediction result (See Figure 1 on Page 10,
[media_image5.png: Figure 1 of Zhou, overview of the CTLTR framework]
, Page 10 Figure 1 Caption, “(1) It first encodes POIs into low-dimensional embeddings while also considering spatial and temporal contexts of POIs; (2) the designed trajectory data augmentation procedure greatly expands the pre-training samples by creating sub-trajectories; (3) a hierarchical Base model formed by two LSTM networks (query constructor and POI recommender); (4) two pretext tasks (POI-trajectory correlation and segment-trajectory correlation) based on mutual information maximization, enabling us to pre-train the CTLTR model without label information; and (5) a fine-tuned prediction layer for tour recommendation.”
Zhou’s model acquires raw trajectory data and augments it into sub-trajectories, which are then processed by a base model (Query Constructor) which encodes the path into latent representations. These path representations are optimized via Mutual Information Maximization (MIM) and subsequently fed into a fine-tuned prediction layer that utilizes similarity matching and SoftMax to output a recommended tour.).
Regarding claim 8,
Zhou teaches the prediction result comprises at least one of: a path category, an abnormal event, a next position, or a schedule (Page 6 Under Section 2.3, “Recurrent neural networks (RNN) have been successfully applied in many sequential data, such as machine translation [3], click prediction [87] and text classification [35]. ST-RNN [43] models spatio-temporal data using RNN for next location prediction.”
Here, Zhou teaches that the prediction result is based on the next position/location, which satisfies the claim limitation that the prediction result comprise at least one of those categories.).
Regarding claim 9,
Zhou teaches for training a path representation model… acquiring at least one trajectory point of at least one user (Page 4 of Introduction, “To address the above issues, we propose a novel tour recommendation method Contrastive Trajectory Learning for Tour Recommendation (CTLTR)… CTLTR is a deep neural network based recommendation model, leveraging RNN as the basic building block but introducing complementary training objectives to improve the model performance… We propose a new auxiliary training objective to enhance the recommendation accuracy and enrich trip representations…”, Page 6 Paragraph 3, “Due to the impressive performance achieved by deep neural networks (DNN) in a broad range of tasks, a few recent works attempted to capture complex trajectory data and human mobility with various DNNs. DeepTrip [17] models historical check-in sequence with RNN and adopts an auxiliary network to learn the latent representations of tourists’ trajectories”, Definition 3.1 on Page 8, “(Tour Recommendation). INPUT: A user-provided query consisting of the desired start point ls and start time ts , the length of the trip N (i.e., the number of POIs to visit), and the end point le at time te . OUTPUT: The tour recommender system returns a tour route T =(l1 = ls ,l2,l3,...,lN = le )”
Contrastive Trajectory Learning for Tour Recommendation (CTLTR) is a tour representation model that utilizes geolocated social network data to define user/tourist trajectories as sequences of visited Points of Interest (POIs) associated with spatial-temporal metadata. The path representation corresponds to the trajectory embedding representation learned by the fine-tuned model from which the output route is produced.),
wherein each trajectory point of each user comprises a place passed by the each user, a start time and a duration (Page 6 Paragraph 3, “DeepTrip [17] models historical check-in sequence with RNN and adopts an auxiliary network to learn the latent representations of tourists’ trajectories…”, Definition 3.1 on Page 8, “(Tour Recommendation). INPUT: A user-provided query consisting of the desired start point ls and start time ts , the length of the trip N (i.e., the number of POIs to visit), and the end point le at time te.”, Page 11 Section 4.1.2 Spatial and Temporal Contexts of POIs, “Following Reference [17], we encode the spatialtemporal context of each location in a trajectory by incorporating the geographical and temporal constraints imposed by the start point and end points. That is, the current-time geographical distance u(li,τ ) of a particular POI visiting li,τ (i.e., a tourist visits li at time τ ) is calculated by the following:
[media_image1.png: Equation (8) of Zhou]
where d(·, ·) denotes the Euclidean distance between two locations…The rationale behind Equation (8) is to account for the relative distance constraints imposed by the start and end POIs… We note that other contexts and constraints, e.g., duration time and queuing time, can be encoded in a similar way.”
Each trajectory point inputted by the user is used to encode that trajectory point. Equation (8) calculates the Euclidean distance between l_(i,τ), which represents the tourist visiting a place/location at time τ, and l_s, t_s, which represents the tourist visiting the start point at the start time. The trajectory points that every user inputs into the CTLTR model always contain a duration, since Equation (8) calculates the distance from the start point to the location the tourist is visiting, incorporating the start time and the time the tourist arrived at that location.);
inputting the at least one trajectory point of the at least one user into a pre-trained model to obtain a trajectory representation of each user (Page 10 Under Section 4.1, “The Base model serves as a basic supervised framework to encode the trajectories into latent representations containing semantic relationships and sequential visiting patterns between POIs.”, Page 6 Paragraph 3, “DeepTrip [17] models historical check-in sequence with RNN and adopts an auxiliary network to learn the latent representations of tourists’ trajectories”, Page 12 Section 4.2, “Once the CTLTR model is pre-trained, it can be used as fine-tuning on the tour recommendation problem.”
The CTLTR framework utilizes a hierarchical RNN, referred to as the Base model, which ingests sequences of trajectory points from tourists/users to produce low-dimensional representations that capture user-specific mobility patterns for personalized recommendation.);
and adjusting a network parameter of the pre-trained model according to a difference between the place passed by the each user and the position of the each trajectory point obtained by searching, to obtain the path representation model (Page 14 Section 4.3.3, “We use the cross-entropy loss function to optimize the model. Specifically, for a certain trajectory T , the loss is calculated by the following:
[Zhou’s cross-entropy loss equation, reproduced in the original as image media_image2.png; not rendered here]
where N is the length of the trajectory, li is the ith ground truth POI, and l̂i is the predicted POI.”, Page 4 Introduction, “CTLTR is a deep neural network based recommendation model, leveraging RNN as the basic building block but introducing complementary training objectives to improve the model performance. Specifically, CTLTR splits each trajectory into multiple subsequences in a recursive manner, which preserves the coherent motion patterns and significantly augments the trajectory training data”, Page 8 Section 3.1, “OUTPUT: The tour recommender system returns a tour route T =(l1 = ls ,l2,l3,...,lN = le ).”
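Because the cited loss function appears in this record only as an embedded image, the following LaTeX sketch is offered solely as a hedged reconstruction of the standard per-POI cross-entropy that Zhou’s accompanying text describes; the exact notation of Zhou’s equation may differ:

```latex
% Hedged reconstruction (not a verbatim copy of Zhou's equation):
% a standard cross-entropy summed over the N POIs of trajectory T,
% where l_i is the i-th ground-truth POI and \hat{l}_i is the
% model's predicted POI at step i.
\mathcal{L}(T) \;=\; -\sum_{i=1}^{N} \log P\bigl(\hat{l}_i = l_i \mid l_1, \ldots, l_{i-1}\bigr)
```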
Zhou fine-tunes a pre-trained neural network by adjusting network parameters using a loss function that explicitly measures the difference between the ground-truth places actually passed by the user and the predicted positions (POIs) along the trajectory. By minimizing the cross-entropy loss over trajectory subsequences, the model updates its parameters based on the ground truth and the predicted result, thereby producing a path representation model. The path representation corresponds to the trajectory embedding representation learned by the fine-tuned model from which the output route is produced.).
Zhou does not teach [a]n apparatus… comprising: at least one processor; and a storage device, wherein the storage device stores instructions executable by the at least one processor, and the instructions when executed by the at least one processor cause the at least one processor to perform operations… and obtaining, for each user, a position of each trajectory point from the trajectory representation of the user by searching according to the start time and the duration of each trajectory point of the each user.
Huberman, in the same field of endeavor, teaches [a]n apparatus… comprising: at least one processor; and a storage device, wherein the storage device stores instructions executable by the at least one processor, and the instructions when executed by the at least one processor cause the at least one processor to perform operations comprising (Paragraph 197 of Huberman, “In some embodiments, sparse map 800 may be stored on a storage device or a non-transitory computer-readable medium provided onboard vehicle 200 (e.g., a storage device included in a navigation system onboard vehicle 200). A processor (e.g., processing unit 110) provided on vehicle 200 may access sparse map 800 stored in the storage device or computer-readable medium provided onboard vehicle 200 in order to generate navigational instructions for guiding the autonomous vehicle 200 as the vehicle traverses a road segment.”):
obtaining, for each user, a position of each trajectory point from the trajectory representation of the user by searching according to the start time and the duration of each trajectory point of the each user (Paragraph 230 of Huberman, “ Each vehicle 200 may collect data relating to a path that the vehicle took along the road segment. The path traveled by a particular vehicle may be determined based on camera data, accelerometer information, speed sensor information, and/or GPS information, among other potential sources.”, Paragraph 248, “The geometry of a reconstructed trajectory (and also a target trajectory) along a road segment may be represented by a curve in three dimensional space, which may be a spline connecting three dimensional polynomials. The reconstructed trajectory curve may be determined from analysis of a video stream or a plurality of images captured by a camera installed on the vehicle. In some embodiments, a location is identified in each frame or image that is a few meters ahead of the current position of the vehicle. This location is where the vehicle is expected to travel to in a predetermined time period. This operation may be repeated frame by frame, and at the same time, the vehicle may compute the camera's ego motion (rotation and translation). At each frame or image, a short range model for the desired path is generated by the vehicle in a reference frame that is attached to the camera.”
This reference describes reconstructing and representing the trajectory of a user’s vehicle as a time-based sequence of trajectory points, where each point corresponds to a predicted position at a particular time interval derived from sensor data. By determining the vehicle’s position frame-by-frame over predetermined time periods, Huberman teaches obtaining the position of each trajectory point by searching the trajectory representation according to the start time and duration of each point.);
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the teaching of Zhou’s CTLTR framework for learning trajectory embeddings with Huberman’s apparatus and processor(s) to perform the method for obtaining positions of trajectory points based on start times, durations, and sensor data in order to generate trajectory representations used for more accurate and context-aware path predictions (Paragraph 2 of Huberman).
Claim 14 is an apparatus corresponding to method claim 6 and is rejected using the same rationale as claim 6.
Regarding claim 15, the “predicting unit” limitation is met by Zhou’s fine-tuned prediction layer (Step 5), which takes the path representations from the base model as input and outputs a tour recommendation. Zhou’s prediction architecture is the functional equivalent of the claimed unit. Claim 15 is an apparatus corresponding to method claim 7 and is rejected using the same rationale as claim 7.
Claim 16 is an apparatus corresponding to method claim 8 and is rejected using the same rationale as claim 8.
Regarding claim 17,
Zhou teaches acquiring at least one trajectory point of at least one user (Page 4 of Introduction, “To address the above issues, we propose a novel tour recommendation method Contrastive Trajectory Learning for Tour Recommendation (CTLTR)… CTLTR is a deep neural network based recommendation model, leveraging RNN as the basic building block but introducing complementary training objectives to improve the model performance… We propose a new auxiliary training objective to enhance the recommendation accuracy and enrich trip representations…”, Page 6 Paragraph 3, “Due to the impressive performance achieved by deep neural networks (DNN) in a broad range of tasks, a few recent works attempted to capture complex trajectory data and human mobility with various DNNs. DeepTrip [17] models historical check-in sequence with RNN and adopts an auxiliary network to learn the latent representations of tourists’ trajectories”, Definition 3.1 on Page 8, “(Tour Recommendation). INPUT: A user-provided query consisting of the desired start point ls and start time ts , the length of the trip N (i.e., the number of POIs to visit), and the end point le at time te . OUTPUT: The tour recommender system returns a tour route T =(l1 = ls ,l2,l3,...,lN = le )”
Contrastive Trajectory Learning for Tour Recommendation (CTLTR) is a tour representation model that utilizes geolocated social network data to define user/tourist trajectories as sequences of visited Points of Interest (POIs) associated with spatial-temporal metadata. The path representation corresponds to the trajectory embedding representation learned by the fine-tuned model from which the output route is produced.),
wherein each trajectory point of each user comprises a place passed by the each user, a start time and a duration (Page 6 Paragraph 3, “DeepTrip [17] models historical check-in sequence with RNN and adopts an auxiliary network to learn the latent representations of tourists’ trajectories…”, Definition 3.1 on Page 8, “(Tour Recommendation). INPUT: A user-provided query consisting of the desired start point ls and start time ts , the length of the trip N (i.e., the number of POIs to visit), and the end point le at time te.”, Page 11 Section 4.1.2 Spatial and Temporal Contexts of POIs, “Following Reference [17], we encode the spatialtemporal context of each location in a trajectory by incorporating the geographical and temporal constraints imposed by the start point and end points. That is, the current-time geographical distance u(li,τ ) of a particular POI visiting li,τ (i.e., a tourist visits li at time τ ) is calculated by the following:
[Equation (8) of Zhou, reproduced in the original as image media_image1.png; not rendered here]
where d(·, ·) denotes the Euclidean distance between two locations…The rationale behind Equation (8) is to account for the relative distance constraints imposed by the start and end POIs… We note that other contexts and constraints, e.g., duration time and queuing time, can be encoded in a similar way.”
Each trajectory point inputted by the user is used to encode that point’s spatial-temporal context. Equation (8) calculates the Euclidean distance between l_(i,τ), which represents the tourist visiting a place/location at time τ, and (l_s, t_s), which represents the tourist at the start point at the start time. The trajectory points that every user inputs into the CTLTR model therefore always contain a duration, since Equation (8) computes the distance from the start point to the location the tourist is visiting while incorporating both the start time and the time the tourist arrives at that location.);
inputting the at least one trajectory point of the at least one user into a pre-trained model to obtain a trajectory representation of each user (Page 10 Under Section 4.1, “The Base model serves as a basic supervised framework to encode the trajectories into latent representations containing semantic relationships and sequential visiting patterns between POIs.”, Page 6 Paragraph 3, “DeepTrip [17] models historical check-in sequence with RNN and adopts an auxiliary network to learn the latent representations of tourists’ trajectories”, Page 12 Section 4.2, “Once the CTLTR model is pre-trained, it can be used as fine-tuning on the tour recommendation problem.”
The CTLTR framework utilizes a hierarchical RNN referred to as the Base model which ingests sequences of trajectory points from tourists/ users to produce low-dimensional representations to capture user-specific mobility patterns for personalized recommendation.);
and adjusting a network parameter of the pre-trained model according to a difference between the place passed by the each user and the position of the each trajectory point obtained by searching, to obtain the path representation model (Page 14 Section 4.3.3, “We use the cross-entropy loss function to optimize the model. Specifically, for a certain trajectory T , the loss is calculated by the following:
[Zhou’s cross-entropy loss equation, reproduced in the original as image media_image2.png; not rendered here]
where N is the length of the trajectory, li is the ith ground truth POI, and l̂i is the predicted POI.”, Page 4 Introduction, “CTLTR is a deep neural network based recommendation model, leveraging RNN as the basic building block but introducing complementary training objectives to improve the model performance. Specifically, CTLTR splits each trajectory into multiple subsequences in a recursive manner, which preserves the coherent motion patterns and significantly augments the trajectory training data”, Page 8 Section 3.1, “OUTPUT: The tour recommender system returns a tour route T =(l1 = ls ,l2,l3,...,lN = le ).”
Zhou fine-tunes a pre-trained neural network by adjusting network parameters using a loss function that explicitly measures the difference between the ground-truth places actually passed by the user and the predicted positions (POIs) along the trajectory. By minimizing the cross-entropy loss over trajectory subsequences, the model updates its parameters based on the ground truth and the predicted result, thereby producing a path representation model. The path representation corresponds to the trajectory embedding representation learned by the fine-tuned model from which the output route is produced.).
Zhou does not teach [a] non-transitory computer readable storage medium, storing a computer instruction, wherein the computer instruction when executed by a computer causes the computer to perform operations comprising…obtaining, for each user, a position of each trajectory point from the trajectory representation of the user by searching according to the start time and the duration of each trajectory point of the each user.
Huberman, in the same field of endeavor, teaches [a] non-transitory computer readable storage medium, storing a computer instruction, wherein the computer instruction when executed by a computer causes the computer to perform operations comprising (Paragraph 16 of Huberman, “In an embodiment, a non-transitory computer-readable medium may store instructions that, when executed by at least one processing device, cause the device to perform a method comprising receiving from an image capture device one or more images representative of an environment of a vehicle.”):
obtaining, for each user, a position of each trajectory point from the trajectory representation of the user by searching according to the start time and the duration of each trajectory point of the each user (Paragraph 230 of Huberman, “ Each vehicle 200 may collect data relating to a path that the vehicle took along the road segment. The path traveled by a particular vehicle may be determined based on camera data, accelerometer information, speed sensor information, and/or GPS information, among other potential sources.”, Paragraph 248, “The geometry of a reconstructed trajectory (and also a target trajectory) along a road segment may be represented by a curve in three dimensional space, which may be a spline connecting three dimensional polynomials. The reconstructed trajectory curve may be determined from analysis of a video stream or a plurality of images captured by a camera installed on the vehicle. In some embodiments, a location is identified in each frame or image that is a few meters ahead of the current position of the vehicle. This location is where the vehicle is expected to travel to in a predetermined time period. This operation may be repeated frame by frame, and at the same time, the vehicle may compute the camera's ego motion (rotation and translation). At each frame or image, a short range model for the desired path is generated by the vehicle in a reference frame that is attached to the camera.”
This reference describes reconstructing and representing the trajectory of a user’s vehicle as a time-based sequence of trajectory points, where each point corresponds to a predicted position at a particular time interval derived from sensor data. By determining the vehicle’s position frame-by-frame over predetermined time periods, Huberman teaches obtaining the position of each trajectory point by searching the trajectory representation according to the start time and duration of each point.);
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the teaching of Zhou’s CTLTR framework for learning trajectory embeddings with Huberman’s non-transitory computer readable storage medium storing instructions that, when executed, obtain positions of trajectory points based on start times, durations, and sensor data, in order to generate trajectory representations used for more accurate and context-aware path predictions (Paragraph 2 of Huberman).
Claims 2-5 and 10-13 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou (“Contrastive Trajectory Learning for Tour Recommendation”, 2021) in view of Li (CN-113498070-A).
Regarding claim 2,
Zhou does not teach acquiring a sample set, a sample in the sample set comprising a sample trajectory and a tag; and using respectively the sample trajectory and the tag in the sample set as an input and an expected output of the path representation model, to perform supervised training on the path representation model.
Li, in the same field of endeavor, teaches acquiring a sample set (Last Paragraph of Page 2, “…the AP prediction model is a model obtained based on historical roaming path information training of a plurality of mobile devices”, Paragraph 4 of Page 3, “…after training to obtain the AP prediction model, along with the accumulation of historical roaming path information obtained by the data analyzer, the data analyzer can also perform increment update of AP prediction model…”
Li discloses training an AP prediction model using historical roaming path information from a plurality of mobile devices, which involves acquiring a set of training samples. Li further describes accumulating additional historical roaming path information for incremental model updates, confirming the continued use of the sample set further down the pipeline.)
a sample in the sample set comprising a sample trajectory and a tag (Paragraph 5 of Page 32, “The historical roaming path information of any one of the plurality of mobile devices includes an identification of an AP for reflecting a historical roaming path… in sequence”, Paragraph 6 of Page 14, “For example, assuming that the historical roaming information X obtained by the data analyzer comprises an identifier of AP1 to AP4 arranged in turn”, Paragraph 2 of Page 21,“…the representation mode of the roaming path information may be… time sequence representation… ordered data set mode, such as (AP2, AP3); also can be represented by arrowhead connection”, Last Paragraph of Page 3, “the expected output (i.e., tag) comprises the actual roaming AP of the second AP in the history roaming path information”, Paragraph 2 of Page 19, “the tag can be a predicted classification result (called classification label), namely the predicted roaming AP”, Paragraph 2 of Page 20, “AP1 to AP4 arranged in turn… is the identifier of AP1, then the second AP is AP1, the label is AP2… AP1 and AP2, then the second AP is AP2, the tag is AP3; or the identification of… AP1 to AP3, then the second AP is AP3, the tag is AP4.”
Li discloses that each training sample includes historical path information represented as a sequence of AP identifiers, which represents a sample trajectory. Li further discloses that each trajectory is associated with a tag corresponding to the actual roaming AP (classification label) following the sequence. Each training sample is thus formed by taking a sequential portion of the historical roaming path information as the sample trajectory and assigning as its tag the next roaming AP that follows that sequence in the historical data.);
and using respectively the sample trajectory and the tag in the sample set as an input and an expected output of the path representation model (Last Paragraph of Page 3, “…the continuous non-end feature data in the history roaming path information is the input data; correspondingly, the expected output (i.e., tag) comprises the actual roaming AP of the second AP in the history roaming path information (i.e., the last AP of the second AP recorded in the history roaming path information);”
Li discloses using continuous, non-end historical roaming path information as input data to the AP prediction model, which corresponds to the sample trajectory. Li further teaches that the expected output of the model is the tag, namely the actual roaming AP following the trajectory, thereby using the trajectory and tag as the model input and expected output.),
to perform supervised training on the path representation model (Paragraph 2 of Page 19, “the machine learning algorithm can be divided into supervisby learning algorithm… the supervised learning algorithm …sample data… is composed of input data and expected output… called tag”, Paragraph 7 of Page 4,“The AP prediction model is obtained based on historical roaming path information training of the plurality of mobile devices.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the teaching of Zhou’s CTLTR framework for learning trajectory representation embeddings with Li’s teaching of training an AP prediction model using historical roaming path information, where each trajectory is paired with a next-AP tag and used as the input and expected output for supervised learning, in order to generate sequence embeddings and improve the predictive accuracy of the next position (Page 7 Paragraph 5 of Li).
Regarding claim 3,
Zhou teaches dividing, for a target sample trajectory…, the target sample trajectory into at least one segment… (Page 14 Section 4.3.2, Modeling Segment-trajectory Correlation, “In addition, to maximize the mutual information between POI and trajectory, we extend the POI-trajectory correlation to segment-trajectory correlation, which models a sequence of POIs (segment) with its surrounding context… Therefore, we propose to model the segment-trajectory correlation in a similar way, i.e., define the pretext task as a subsequence Cloze problem. Consider a sequence of POIs {lj,tj ,...,lj+n,tj+n } with length n +1 ∈ [1, N −2]. We mask the subsequence [mask1,mask2,...] in the original trajectory T . Then, we predict the masked segment based on the surrounding context T s = {l1,t1 ,..., [mask1,mask2,...],...,lN,tN }. The model is also optimized by a similarity loss function based on mutual information maximization”
Zhou’s path representation model learning method uses the subsequence Cloze task, which effectively divides the trajectory into two subsets: a masked segment (S_(j,n)) and the remaining unmasked context (T_s). The model is then trained to predict the masked segment from the unmasked context, enhancing prediction and optimizing the network parameters.);
inputting, for each target sample trajectory, at least one segment of the target sample trajectory into the path representation model to obtain a representation of each segment of the target sample trajectory (Page 14 Section 4.3.2, Modeling Segment-trajectory Correlation, “Then, we predict the masked segment based on the surrounding context T s = {l1,t1 ,..., [mask1,mask2,...],...,lN,tN }. The model is also optimized by a similarity loss function based on mutual information maximization
[Zhou’s similarity loss equation based on mutual information maximization, reproduced in the original as image media_image6.png; not rendered here]
…Similarly to Equation (16), the mutual information between the context and the trajectory segment can be computed as
[Zhou’s context–segment mutual information equation, reproduced in the original as image media_image7.png; not rendered here]
”
Zhou explicitly divides each target sample trajectory into at least one segment by masking a subsequence (the segment) and separating it from the remaining unmasked context, creating distinct trajectory segments used during learning. These trajectory segments are then input into the path representation model to obtain segment-level representations, which are optimized via mutual information maximization (MIM); this corresponds to the instant claim’s limitation of inputting at least one segment of a target sample trajectory into the model to obtain a representation.);
and constructing, for each target sample trajectory, the representation of each segment into a sequence of representations of the target sample trajectory (Page 10 Figure 1 Caption, “…our proposed CTLTR framework, which is composed of the following five main modules: (1) It first encodes POIs into low-dimensional embeddings while also considering spatial and temporal contexts of POIs; (2) the designed trajectory data augmentation procedure greatly expands the pre-training samples by creating sub-trajectories;”, Page 14 Section 4.3.2, “we extend the POI-trajectory correlation to segment-trajectory correlation, which models a sequence of POIs (segment) with its surrounding context… The model is also optimized by a similarity loss function based on mutual information maximization”
Zhou creates sub-trajectories (segments) from each target sample trajectory and encodes them into low-dimensional representations, thereby constructing representations of each segment of the trajectory. These segment representations form a sequence of representations corresponding to the original trajectory, as shown by modeling segment-trajectory correlation over sequences of POIs and optimizing them via MIM.),
and inputting the sequence and a time identifier corresponding to each segment into a sequence model (Page 12 Section 4.1.2, “In CTLTR, we select the long short-term memory (LSTM) [28] as the basic recurrent unit to model the temporal dependencies among POI trajectories… The POI generating process is executed in a recursive manner. Specifically, for a running round at time t ∈ [2, N − 1], we concatenate the historical information h^QC_(t−1) and the length embedding r_t (obtained by random initialization) to form the current query… where E_(τs) and E_(τe) are the unified location embeddings; h^QC_t and h^PR_t are the LSTM hidden state vectors of the query constructor and the POI recommender, respectively”
Zhou inputs an ordered sequence of segment-derived representations into an LSTM, which is a sequence model designed to capture temporal dependencies among trajectory elements. The use of an explicit time index t and a time/length embedding r_t concatenated at each step provides a correspondence to the claimed time identifier for each segment when inputting the sequence into the model.),
to output a sequence representation of the target sample trajectory (Page 11 Section 4.1.3, “The planning will iterate until the complete trajectory is generated, i.e., when the length of the generated trajectory |T | meets the number of desired attractions N. As shown in Figure 1, the hierarchical response generator consists of two parts: a query constructor and a POI recommender. The former aims to model the existing trajectories and generate the corresponding hidden state representation by utilizing recurrent neural networks.”, Page 12 Section 4.1.4, “Once the hierarchical response generator produced enough tour hidden state vectors…”
Zhou’s hierarchical response generator iteratively produces a sequence of hidden state vectors using an RNN until the complete trajectory is generated, forming a sequence-level representation of the target sample trajectory. The reference’s disclosure of “produced enough tour hidden state vectors” confirms that the output is an ordered sequence representation corresponding to the entire trajectory.)
Zhou does not teach a target sample trajectory with a total duration exceeding a predetermined value in the sample set, divided according to a predetermined time interval.
Li, in the same field of endeavor, teaches with a total duration exceeding a predetermined value in the sample set according to a predetermined time interval (Last Paragraph of Page 16 and Paragraph 2 and 4 of Page 17, “For example, the attribute information of the communication connection comprises …for the AP belongs to the target communication connection, filter condition comprises… the duration of the target communication connection is greater than the specified time threshold.”
Li teaches that each sample includes connection information, which comprises the duration of the communication connection. Li filters these samples so that only connections whose duration exceeds a threshold are included, ensuring that the sample set contains only samples meeting the predetermined duration criterion.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Zhou’s CTLTR framework for learning trajectory embeddings, which divides each trajectory into segments, generates segment-level representations, and inputs an ordered sequence of segment representations with time identifiers into a sequence model to produce a sequence-level trajectory representation, with Li’s teaching of filtering a sample set to include only connections whose duration exceeds a predetermined value, in order to use the filtered samples for subsequent processing (Paragraph 3 of Page 15 of Li).
Regarding claim 4,
Zhou teaches outputting the sequence representation of each target sample trajectory by a prediction model, to obtain a prediction result of the each target sample trajectory (Page 8 Section 3.1, “INPUT: A user-provided query consisting of the desired start point ls and start time ts , the length of the trip N (i.e., the number of POIs to visit), and the end point le at time te . OUTPUT: The tour recommender system returns a tour route T.”, Page 11 Section 4.1.3, Hierarchical Response Generator, “According to the current prediction along with previous predicted POIs, our model adjusts the query and carries out the next round of POI planning. The planning will iterate until the complete trajectory is generated, i.e., when the length of the generated trajectory |T | meets the number of desired attractions N.”);
Zhou does not teach adjusting a network parameter of the sequence model according to a difference between the prediction result of the each target sample trajectory and a tag corresponding to the each target sample trajectory.
Li, in the same field of endeavor, teaches adjusting a network parameter of the sequence model according to a difference between the prediction result of the each target sample trajectory and a tag corresponding to the each target sample trajectory (Paragraph 3 of Page 19, “…the AP prediction model training mode is as follows: repeatedly executing… until the loss value corresponding to the loss function (loss function) is converged… the training process of the AP prediction model comprises: performing forward calculation to the initial AP prediction model based on a plurality of historical roaming path information and preset parameter set, obtaining the output data of the initial AP prediction model; based on the output data and the expected output, updating the parameter set of the initial AP prediction model by means of reverse transmission. the initial AP prediction model is the initial architecture of the AP prediction model, the expected output can be calculated based on the feature data in a plurality of historical roaming path information.”, Paragraph 8 of Page 3, “for each history roaming path information… obtaining the output data of the initial AP prediction model… the output data comprises a predicted second history roaming path… the second AP is the last AP”
Li discloses performing forward calculations to predict the next AP for each historical roaming path and then updating the AP prediction model’s parameters by reverse transmission (i.e., backpropagation) based on the difference between the predicted AP and the actual next AP (the tag).).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the teaching of Zhou’s teaching of outputting a sequence representation of each target sample trajectory using a prediction model with Li’s teaching of updating a model’s network parameters based on the difference between the predicted next AP and the actual next AP (tag) in order to enable the sequence model in Zhou’s framework to improve prediction accuracy by adjusting its parameters according to the prediction error for each trajectory (Paragraph 6 on Page 7 of Li).
Regarding claim 5,
Zhou does not teach the tag comprises at least one of: a path category tag, an abnormal event tag, a next position tag, or a schedule tag.
Li, in the same field of endeavor, teaches the tag comprises at least one of: a path category tag, an abnormal event tag, a next position tag, or a schedule tag (Last Paragraph of Page 3, “the expected output (i.e., tag) comprises the actual roaming AP of the second AP”, Paragraph 2 of Page 20, “the label is AP2…the tag is AP3… the tag is AP4.”
Li uses each historical roaming path (a sequence of APs, where an AP is an access point providing network service to a mobile device) as input and the next AP in the sequence as the tag, thereby pairing each trajectory with its next position for supervised learning.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Zhou’s CTLTR framework with Li’s teaching that the tag comprises a next position tag in order to provide labeled trajectory data that can guide supervised training and enable the model to predict meaningful outcomes for each trajectory segment (Paragraph 2 on Page 19 of Li).
Claim 10 is an apparatus claim corresponding to method claim 6 and is rejected using the same rationale as claim 2.
Claim 11 is an apparatus claim corresponding to method claim 7 and is rejected using the same rationale as claim 3.
Claim 12 is an apparatus claim corresponding to method claim 8 and is rejected using the same rationale as claim 4.
Claim 13 is an apparatus claim corresponding to method claim 9 and is rejected using the same rationale as claim 5.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAJD MAHER HADDAD whose telephone number is (571) 272-2265. The examiner can normally be reached Monday-Friday, 8:00 am-5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamran Afshar, can be reached at (571) 272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.M.H./Examiner, Art Unit 2125
/KAMRAN AFSHAR/Supervisory Patent Examiner, Art Unit 2125