Prosecution Insights
Last updated: April 19, 2026
Application No. 18/742,040

Span Pointer Networks for Non-Autoregressive Task-Oriented Semantic Parsing for Assistant Systems

Status: Non-Final OA (§103)
Filed: Jun 13, 2024
Examiner: TRACY JR., EDWARD
Art Unit: 2656
Tech Center: 2600 (Communications)
Assignee: Meta Platforms Inc.
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 10m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% (81 granted / 105 resolved), above average (+15.1% vs TC avg)
Interview Lift: +35.7% for resolved cases with interview (strong)
Typical Timeline: 2y 10m avg prosecution; 26 applications currently pending
Career History: 131 total applications across all art units

Statute-Specific Performance

§101: 20.3% (-19.7% vs TC avg)
§103: 71.9% (+31.9% vs TC avg)
§102: 3.7% (-36.3% vs TC avg)
§112: 3.7% (-36.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 105 resolved cases.
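The per-statute deltas are internally consistent: reading each delta as the examiner's rate minus the Tech Center average, every row implies the same TC baseline. A minimal Python check (the subtraction convention is an assumption; the percentages come from the table above):

```python
# Examiner's statute-specific rates and their reported deltas vs the
# Tech Center average, as listed above (in percent).
examiner_rate = {"101": 20.3, "103": 71.9, "102": 3.7, "112": 3.7}
delta_vs_tc   = {"101": -19.7, "103": 31.9, "102": -36.3, "112": -36.3}

# Assuming delta = examiner_rate - tc_average, the implied TC average is:
implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}

print(implied_tc_avg)  # every statute implies the same 40.0 baseline
```

Every statute working out to an identical 40.0% baseline suggests the "TC average" is a single estimated figure rather than a per-statute measurement.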

Office Action

§103
DETAILED ACTION

Introduction

1. This office action is in response to Applicant’s submission filed on 9/17/2024. Claims 2-21 are pending in the application and have been examined.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

3. The information disclosure statements (IDSs) submitted on 8/14/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, all the documents in the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 103

4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. Claims 2, 3, 8, 12-15, 17, 18, 20, and 21 are rejected under 35 U.S.C. 103 as unpatentable over US Pat. App. Pub. No. 20210374603 (Xia et al., hereinafter “Xia”) in view of US Pat. App. Pub. No. 20230306202 (Osugi et al., hereinafter “Osugi”).
With regard to Claim 2, Xia describes: “A method comprising, by one or more computing systems: receiving, from a client system, a user input comprising a plurality of input tokens; (Paragraph 36 describes that an input utterance is converted to a sequence of tokens.) generating a span-based frame representation based on the plurality of input tokens, the span-based frame representation comprising one or more intents, (Paragraph 40 describes that intents 406 are determined from the tokens.) one or more slots, (Paragraph 94 describes that slots may be determined from the tokens.) and a span, (Paragraph 40 describes a start of sequence token and an end of sequence token, which defines the span.) wherein the span comprises a first index endpoint associated with a first token of the plurality of input tokens and a second index endpoint associated with a second token of the plurality of input tokens; (Paragraph 40 describes a start of sequence token and an end of sequence token.) executing, responsive to the user input, one or more tasks based on the span-based frame representation.” (Paragraph 91 describes that a classification task may be performed based on the input.)

Xia does not explicitly describe: “encoding, based on an encoder of a natural language understanding module, the user input to generate a feature vector for the user input; determining, by a length module of the natural language understanding module, a length of the span-based frame representation based on the feature vector for the user input, wherein generating the span-based frame representation is further based on the length of the span-based frame representation.”

However, Osugi describes: “encoding, based on an encoder of a natural language understanding module, the user input to generate a feature vector for the user input; (Paragraph 42 describes that the device determines short term feature vectors.) determining, by a length module of the natural language understanding module, a length of the span-based frame representation based on the feature vector for the user input, wherein generating the span-based frame representation is further based on the length of the span-based frame representation.” (Paragraph 69 describes that a length is determined based on the feature vector generated based on user input.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the long-term feature generation as described by Osugi into the device of Xia to allow building a long-term output, as described at paragraph 69 of Osugi.

With respect to Claim 3, Xia describes “the user input is based on an utterance by a user of the client system.” Paragraph 36 describes that a user utterance is received as an input.

With respect to Claim 8, Xia describes “determining the length of the span-based frame representation is further based on one or more hidden states associated with the encoder.” (Paragraph 45 describes that hidden states may be used.)

With respect to Claim 12, Xia describes “identifying a subset of the plurality of input tokens based on the first index endpoint and the second index endpoint, (Paragraph 40 describes a start of sequence token and an end of sequence token, which defines the span.) wherein executing the one or more tasks is further based on the subset of input tokens.” (Paragraph 91 describes that a classification task may be performed based on the input.)

With respect to Claim 13, Xia describes “parsing the user input is based on a sequence-to-sequence model.” Paragraph 81 describes that a language model such as BERT may be used, which is a sequence-to-sequence model.
With respect to Claim 14, Xia describes “identifying one or more input tokens of the plurality of input tokens based on the first index endpoint and the second index endpoint; (Paragraph 40 describes a start of sequence token and an end of sequence token, which defines the span.) and swapping the span of the span-based frame representation with the identified input tokens to generate a canonical frame representation.” (Paragraph 40 describes that the resulting input sequence 407 (cited as “a canonical frame representation”) is based on the start of sequence token, the end of sequence token, and selected input tokens.)

With respect to Claim 15, Xia describes “executing the one or more tasks is further based on the canonical frame representation.” Paragraph 91 describes that a task may be performed based on input sequence 407.

With respect to Claims 17 and 18, computer readable medium Claim 17 and method Claim 1 are related as a computer readable medium programmed to perform the same method, with each claimed function corresponding to each claimed method step. Further, paragraph 33 of Xia describes that the method can be practiced with a computer including a magnetic or optical medium. Accordingly, Claims 17 and 18 are similarly rejected under the same rationale as applied above with respect to Claims 1 and 12.

With respect to Claims 20 and 21, system Claim 20 and method Claim 1 are related as a system programmed to perform the same method, with each claimed system function corresponding to each claimed method step. Further, paragraph 33 of Xia describes that the method can be practiced with a computer including a magnetic or optical medium. Accordingly, Claims 20 and 21 are similarly rejected under the same rationale as applied above with respect to Claims 1 and 12.

6. Claims 4, 5, 16, and 19 are rejected under 35 U.S.C. 103 as unpatentable over Xia in view of Osugi and US Pat. App. Pub. No. 20200311199 (Yan et al., hereinafter “Yan”).
With respect to Claim 4, Xia in view of Osugi does not explicitly describe this subject matter. However, Yan describes “generating, by an automatic speech recognition module, a transcription of the utterance, wherein the transcription comprises the plurality of input tokens.” Paragraph 22 describes that a text transcript can be generated based on the input audio. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the text transcript as described by Yan into the device of Xia in view of Osugi to allow for different types of input, as described at paragraph 22 of Yan.

With respect to Claim 5, Xia describes “parsing the user input to determine a plurality of [[ontology]] tokens and a plurality of utterance tokens corresponding to the plurality of input tokens; (Paragraph 40 describes that the tokens are categorized into intent tokens and utterance tokens.) decoding the [[ontology]] tokens and the utterance tokens to generate a span-based frame representation comprising one or more intents, (Paragraph 40 describes that intents 406 are determined from the tokens.) one or more slots, (Paragraph 94 describes that slots may be determined from the tokens.) and a span (Paragraph 40 describes a start of sequence token and an end of sequence token, which defines the span.) comprising one or more tokens of the plurality of input tokens.” Xia in view of Osugi does not explicitly describe “ontology tokens”. However, paragraph 21 of Yan describes that ontology tokens may be used to create a knowledge graph. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the ontology tokens as described by Yan into the device of Xia in view of Osugi to allow output in natural language, as described at paragraph 22 of Yan.

With respect to Claim 16, Xia in view of Osugi does not explicitly describe this subject matter.
However, Yan describes “sending, to the client system, instructions for presenting a response generated based on the execution results of the one or more tasks.” Paragraph 47 describes that the output of the model may be a text or audio output from the natural language model (the output of the natural language model in Xia is cited as “one or more tasks”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the response as described by Yan into the device of Xia in view of Osugi to provide helpful natural language responses to a user, as described at paragraph 47 of Yan.

With respect to Claim 19, computer readable medium Claim 17 and method Claim 1 are related as a computer readable medium programmed to perform the same method, with each claimed function corresponding to each claimed method step. Further, paragraph 33 of Xia describes that the method can be practiced with a computer including a magnetic or optical medium. Accordingly, Claim 19 is similarly rejected under the same rationale as applied above with respect to Claims 2-4.

7. Claim 9 is rejected under 35 U.S.C. 103 as unpatentable over Xia in view of Osugi and US Pat. App. Pub. No. 20190122655 (Min et al., hereinafter “Min”). With respect to Claim 9, Xia in view of Osugi does not explicitly describe this subject matter. However, Min describes “generating, based on the length of the span-based frame representation, a plurality of mask tokens, wherein a number of the plurality of mask tokens equals the determined length.” Paragraph 15 describes the use of an MLP network. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the mask tokens as described by Min into the device of Xia in view of Osugi to avoid using complex learning structures, as described at paragraph 15 of Min.

Allowable Subject Matter

8.
Claims 7, 10, and 11 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Reasons for Allowance

9. The cited art does not teach or suggest “the length module is trained based on a length loss optimizing a negative log likelihood loss between ground-truth length-frame tuples and predicted length-frame tuples” as recited in Claim 7, or “generating, based on the length of the span-based frame representation, a plurality of mask tokens, wherein a number of the plurality of mask tokens equals the determined length” as recited in Claim 10, in combination with the features defined in Claim 2.

Conclusion

10. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US Pat. App. Pub. No. 20190066668 (Lin et al., hereinafter “Lin”) also describes the use of MLP structures.

11. Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWARD TRACY whose telephone number is (571) 272-8332. The examiner can normally be reached Monday-Friday, 9 AM-5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bhavesh Mehta, can be reached on 571-272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EDWARD TRACY JR./
Examiner, Art Unit 2656

/BHAVESH M MEHTA/
Supervisory Patent Examiner, Art Unit 2656

Prosecution Timeline

Jun 13, 2024
Application Filed
Feb 07, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566969: METHOD AND APPARATUS FOR TRAINING MACHINE READING COMPREHENSION MODEL, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12561524: TRAINING MACHINE LEARNING MODELS TO AUTOMATICALLY DETECT AND CORRECT CONTEXTUAL AND LOGICAL ERRORS
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12548552: DYNAMIC LANGUAGE SELECTION OF AN AI VOICE ASSISTANCE SYSTEM
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12548554: SYSTEM AND METHOD FOR ACTIVE LEARNING BASED MULTILINGUAL SEMANTIC PARSER
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12536374: METHOD FOR CONSTRUCTING SENTIMENT CLASSIFICATION MODEL BASED ON METAPHOR IDENTIFICATION
Granted Jan 27, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 77%
With Interview: 99% (+35.7%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 105 resolved cases by this examiner. Grant probability derived from career allow rate.
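The headline projections are simple functions of the examiner's career counts. A minimal sketch of the arithmetic, assuming the dashboard uses the career allow rate directly as the baseline grant probability and models the interview lift as an additive percentage-point difference (the without-interview subgroup rate is implied by the page's figures, not stated):

```python
# Career counts stated on this page.
granted = 81
resolved = 105

# Baseline grant probability = career allow rate (displayed as 77%).
allow_rate = granted / resolved
print(f"grant probability: {allow_rate:.0%}")

# Interview lift, assuming an additive percentage-point model: the page
# shows 99% with interview and a +35.7% lift, which implies a
# without-interview rate of roughly 63.3%.
with_interview = 0.99
without_interview = with_interview - 0.357
print(f"interview lift: +{with_interview - without_interview:.1%}")
```

Note that 77% plus 35.7 points would exceed 100%, so the 99% with-interview figure is evidently computed on the interviewed subgroup (or capped) rather than added to the baseline.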
