Prosecution Insights
Last updated: April 19, 2026
Application No. 18/761,485

SYSTEMS AND METHODS FOR IDENTIFYING CONTENT CORRESPONDING TO A LANGUAGE SPOKEN IN A HOUSEHOLD

Non-Final OA §DP
Filed: Jul 02, 2024
Examiner: OPSASNICK, MICHAEL N
Art Unit: 2658
Tech Center: 2600 — Communications
Assignee: Adeia Guides Inc.
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 82%, above average (737 granted / 900 resolved; +19.9% vs TC avg)
Interview Lift: +10.5% (moderate lift, among resolved cases with interview)
Avg Prosecution: 3y 3m typical timeline; 46 applications currently pending
Total Applications: 946 across all art units (career history)

Statute-Specific Performance

§101: 17.7% (-22.3% vs TC avg)
§103: 33.0% (-7.0% vs TC avg)
§102: 29.9% (-10.1% vs TC avg)
§112: 6.3% (-33.7% vs TC avg)

Tech Center averages are estimates; figures are based on career data from 900 resolved cases.
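The headline figures on this page are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, using the counts reported above (variable names are illustrative, not from any analytics API):

```python
# Recompute the headline examiner statistics from the counts on this page.
granted, resolved = 737, 900

allow_rate = granted / resolved        # career allow rate: ~0.819, shown as 82%
interview_lift = 0.105                 # "+10.5% Interview Lift" from this page

# Projected grant probability with an examiner interview, as this page reports it:
with_interview = allow_rate + interview_lift   # ~0.924, shown as 92%

print(f"Allow rate: {allow_rate:.1%}, with interview: {with_interview:.1%}")
```

This reproduces both the 82% career allow rate and the 92% with-interview projection shown in the Prosecution Projections section.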

Office Action

§DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

Content of Specification (b) CROSS-REFERENCES TO RELATED APPLICATIONS: See 37 CFR 1.78 and MPEP § 211 et seq. The Cross-Reference section needs updating, since the 18/212610 application has now issued as a U.S. Patent. Correction is required.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 51-70 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 11,721,321. Although the claims at issue are not identical, they are not patentably distinct from each other because the variations of the claimed aggregate durations between the claims overlap in scope and content. See mapping below.

Application 18/761485:

51.
A method for identifying content corresponding to a language, the method comprising: receiving, via control circuitry, a plurality of verbal inputs over a time period; determining, using voice recognition, a first aggregate duration of the plurality of verbal inputs corresponding to a first language; determining, using voice recognition, a second aggregate duration of the plurality of verbal inputs corresponding to a second language; and providing, based at least in part on the determining the first aggregate duration exceeding the second aggregate duration, a first subset of a plurality of content items corresponding to the first language.

52. The method of claim 51 further comprising: providing a second subset of the plurality of content items corresponding to the second language, wherein the second subset is smaller than the first subset.

53. The method of claim 52 further comprising: determining, using voice recognition, a third aggregate duration of the plurality of verbal inputs corresponding to a third language; and providing a third subset of the plurality of content items corresponding to the third language, wherein the third subset is smaller than the second subset.

54. The method of claim 52 further comprising: receiving input via a user interface selecting a content item from the second subset of the plurality of content items corresponding to the second language; storing data indicating that the second language is preferred; and providing a second subset of the plurality of content items corresponding to the second language, wherein the second subset is larger than the first subset.

55. The method of claim 51 further comprising: receiving input via a user interface selecting the second language as preferred; and providing a second subset of the plurality of content items corresponding to the second language, wherein the second subset is larger than the first subset.

56. The method of claim 51, further comprising: identifying popular content items within the first subset corresponding to the first language; and prioritizing the popular content items within the provided first subset.

57. The method of claim 56, further comprising: determining a second subset of the plurality of content items corresponding to the second language; identifying a second set of popular content items within the second subset corresponding to the second language; and prioritizing the second set of popular content items ahead of at least one of the provided first subset but behind the popular content items within the provided first subset.

58. The method of claim 51, wherein each content item in the first subset of the plurality of content items corresponding to the first language is identified using a phoneme database.

59. The method of claim 51, wherein the providing the first subset of the plurality of content items further comprises providing an advertisement corresponding to the first language.

60. The method of claim 51, wherein the providing the subset of the plurality of content items comprises: identifying a plurality of content sources corresponding to the first language; and providing the subset of the plurality of content items from at least one of the plurality of content sources.

61. A system for identifying content corresponding to a language, the system comprising: memory; control circuitry configured to: receive a plurality of verbal inputs over a time period; determine, using voice recognition, a first aggregate duration of the plurality of verbal inputs corresponding to a first language, wherein the first aggregate duration is stored in the memory; determine, using voice recognition, a second aggregate duration of the plurality of verbal inputs corresponding to a second language, wherein the second aggregate duration is stored in the memory; and provide, based at least in part on the determining the first aggregate duration exceeds the second aggregate duration, a first subset of a plurality of content items corresponding to the first language.

62. The system of claim 61, wherein the control circuitry is further configured to provide a second subset of the plurality of content items corresponding to the second language, wherein the second subset is smaller than the first subset.

63. The system of claim 62, wherein the control circuitry is further configured to: determine, using voice recognition, a third aggregate duration of the plurality of verbal inputs corresponding to a third language; and provide a third subset of the plurality of content items corresponding to the third language, wherein the third subset is smaller than the second subset.

64. The system of claim 62, wherein the control circuitry is further configured to: receive input via a user interface selecting a content item from the second subset of the plurality of content items corresponding to the second language; store data in the memory indicating that the second language is preferred; and provide a second subset of the plurality of content items corresponding to the second language, wherein the second subset is larger than the first subset.

65. The system of claim 61, wherein the control circuitry is further configured to: receive input via a user interface selecting the second language as preferred; and provide a second subset of the plurality of content items corresponding to the second language, wherein the second subset is larger than the first subset.

66. The system of claim 61, wherein the control circuitry is further configured to: identify popular content items within the first subset corresponding to the first language; and prioritize the popular content items within the provided first subset.

67. The system of claim 66, wherein the control circuitry is further configured to: determine a second subset of the plurality of content items corresponding to the second language; identify a second set of popular content items within the second subset corresponding to the second language; and prioritize the second set of popular content items ahead of at least one of the provided first subset but behind the popular content items within the provided first subset.

68. The system of claim 61, wherein each content item in the first subset of the plurality of content items corresponding to the first language is identified by the control circuitry using a phoneme database.

69. The system of claim 61, wherein the control circuitry is further configured to provide the first subset of the plurality of content items with an advertisement corresponding to the first language.

70. The system of claim 61, wherein the control circuitry is further configured to provide the subset of the plurality of content items by: identifying a plurality of content sources corresponding to the first language; and providing the subset of the plurality of content items from at least one of the plurality of content sources.

U.S. Patent No. 11,721,321:

1.
A method for identifying content corresponding to a language, the method comprising: receiving a plurality of verbal inputs over a time period; automatically determining, using voice recognition circuitry, respective languages of the plurality of verbal inputs; determining, for each of the respective languages, an aggregate duration of the plurality of verbal inputs received over the time period that correspond to the language; generating for display a list of a plurality of the respective languages, wherein the plurality of the respective languages is ordered based on the determined aggregate durations; receiving user input identifying a language of the plurality of the respective languages; in response to receiving the user input, identifying content in the identified language; and generating for display a representation of the identified content.

2. The method of claim 1, further comprising: in response to receiving the user input identifying the language: searching a database of content sources to identify a content source that transmits the content in the identified language; and generating for display a representation of the identified content source.

3. The method of claim 2, wherein the representation of the identified content source includes a channel name or number of the identified content source.

4. The method of claim 2, wherein the representation of the identified content source is included in an overlay on top of content currently being generated for display.

5. The method of claim 2, further comprising: retrieving a subscription plan from a storage device; determining that the identified content source that transmits language content in the identified language is not included in the retrieved subscription plan prior to generating for display the representation of the identified content source; cross-referencing the database of content sources to identify a second content source associated with a language field value that corresponds to the identified language; and generating for display a representation of the identified second content source.

6. The method of claim 1, further comprising storing the determined aggregate durations in a memory in association with the languages, respectively.

7. The method of claim 6, further comprising: for at least one of the languages, determining an updated aggregate duration of verbal inputs received that correspond to the language; and storing the updated aggregate duration in association with the language in the memory by overwriting the aggregate duration stored in association with the language.

8. The method of claim 7, further comprising: receiving input selecting a second language of the plurality of the respective languages; and in response to receiving the input selecting the second language: identifying updated content in the second language; and generating for display a representation of the identified updated content.

9. The method of claim 1, wherein generating for display the list of the plurality of respective languages comprises identifying a language spoken most often and placing the language spoken most often at a top of the list.

10. The method of claim 9, wherein identifying the language spoken most often comprises determining a greatest aggregate duration among the determined aggregate durations.

11. A system for identifying content corresponding to a language, the system comprising: control circuitry; and voice recognition circuitry, wherein the control circuitry is configured to receive a plurality of verbal inputs over a time period, wherein the voice recognition circuitry is configured to automatically determine respective languages of the plurality of verbal inputs, and wherein the control circuitry is further configured to: determine, for each of the respective languages, an aggregate duration of the plurality of verbal inputs received over the time period that correspond to the language; generate for display a list of a plurality of the respective languages, wherein the plurality of the respective languages is ordered based on the determined aggregate durations; receive user input identifying a language of the plurality of the respective languages; in response to receiving the user input, identify content in the identified language; and generate for display a representation of the identified content.

12. The system of claim 11, wherein the control circuitry is further configured to: in response to receiving the user input identifying the language: search a database of content sources to identify a content source that transmits content in the identified language, and generate for display a representation of the identified content source.

13. The system of claim 12, wherein the representation of the identified content source includes a channel name or number of the identified content source.

14. The system of claim 12, wherein the representation of the identified content source is included in an overlay on top of content currently being generated for display.

15. The system of claim 12, further comprising storage circuitry, wherein the control circuitry is further configured to: retrieve a subscription plan from the storage circuitry; determine that the identified content source that transmits language content in the identified language is not included in the retrieved subscription plan prior to generating for display the representation of the identified content source; cross-reference the database of content sources to identify a second content source associated with a language field value that corresponds to the identified language; and generate for display a representation of the identified second content source.

16. The system of claim 11, further comprising storage circuitry, wherein the storage circuitry is configured to store the determined aggregate durations in association with the languages, respectively.

17. The system of claim 16, wherein the control circuitry is further configured to, for at least one of the languages, determine an updated aggregate duration of verbal inputs received that correspond to the language, and wherein the storage circuitry is further configured to store the updated aggregate duration in association with the language by overwriting the aggregate duration stored in association with the language.

18. The system of claim 17, wherein the control circuitry is further configured to: receive input selecting a second language of the plurality of the respective languages; and in response to receiving the input selecting the second language: identify updated content in the second language; and generate for display a representation of the identified updated content.

19. The system of claim 11, wherein the control circuitry is further configured to generate for display the list of the plurality of respective languages comprises identifying a language spoken most often and placing the language spoken most often at a top of the list.

20.
The system of claim 19, wherein the control circuitry is further configured to identify the language spoken most often by determining a greatest aggregate duration among the determined aggregate durations.

Allowable Subject Matter

Claims 51-70 are allowable over the prior art of record. The following is a statement of reasons for the indication of allowable subject matter: As per the independent claims, the claim limitations directed to measuring durations, and aggregate durations, of speech-recognized verbal inputs, differentiating which chunks/sections belong to a certain language, and using the aggregate durations to delineate between a first and second language are not explicitly taught by the prior art of record.

With respect to the prior art of record, Yongsin (8,521,531) teaches the determination of a related word in a time-based recent related audio frame to the speech query (Fig. 6). Fructuoso (20150095018) teaches mapping of first language sounds into second language representations (Fig. 1a). Maskatia (20130066464) teaches the search/finding of comparable material that the user requests, and providing the substitute to the user (Fig. 10). Washio (20130173267) teaches the measurement of time positions and durations between 2 speakers (Fig. 4), and calculating a probability of the category of the speech segment (abstract). However, none of the prior art of record, alone or in combination, explicitly teaches the above features listed and found in the independent claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Please see related art listed on the PTO-892 form. Furthermore, the following references were found, with relevant features: Yongsin (8,521,531) teaches the determination of a related word in a time-based recent related audio frame to the speech query (Fig. 6). Fructuoso (20150095018) teaches mapping of first language sounds into second language representations (Fig. 1a). Maskatia (20130066464) teaches the search/finding of comparable material that the user requests, and providing the substitute to the user (Fig. 10). Washio (20130173267) teaches the measurement of time positions and durations between 2 speakers (Fig. 4), and calculating a probability of the category of the speech segment (abstract).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael Opsasnick, telephone number (571) 272-7623, who is available Monday-Friday, 9am-5pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mr. Richemond Dorvil, can be reached at (571) 272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/Michael N Opsasnick/
Primary Examiner, Art Unit 2658
1/19/2026
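The limitation the examiner identified as allowable (classifying each verbal input by language, summing per-language durations, and ranking languages by the aggregates) can be sketched in a few lines. This is a hypothetical illustration of the claimed logic, not the applicant's implementation; the (language, duration) input pairs stand in for an assumed voice-recognition step:

```python
from collections import defaultdict

def rank_languages(verbal_inputs):
    """Sum the durations of recognized verbal inputs per language and
    return the languages ordered by aggregate duration, longest first.

    verbal_inputs: list of (language, duration_seconds) pairs, i.e. the
    output of a hypothetical voice-recognition step that labels each
    verbal input with its language.
    """
    totals = defaultdict(float)
    for language, duration in verbal_inputs:
        totals[language] += duration
    return sorted(totals, key=totals.get, reverse=True)

# Spanish wins on aggregate duration even though English produced more
# individual inputs, mirroring the claims' test of whether a "first
# aggregate duration" exceeds a "second aggregate duration".
inputs = [("en", 5.0), ("es", 30.0), ("en", 4.0), ("es", 25.0), ("fr", 2.0)]
print(rank_languages(inputs))  # ['es', 'en', 'fr']
```

Ranking by total spoken duration rather than input count is what distinguishes the claims from the cited references, which measure per-segment timing but do not aggregate it per language.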

Prosecution Timeline

Jul 02, 2024
Application Filed
Jan 19, 2026
Non-Final Rejection — §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602554: SYSTEMS AND METHODS FOR PRODUCING RELIABLE TRANSLATION IN NEAR REAL-TIME (2y 5m to grant; granted Apr 14, 2026)
Patent 12592246: SYSTEM AND METHOD FOR EXTRACTING HIDDEN CUES IN INTERACTIVE COMMUNICATIONS (2y 5m to grant; granted Mar 31, 2026)
Patent 12586580: System For Recognizing and Responding to Environmental Noises (2y 5m to grant; granted Mar 24, 2026)
Patent 12579995: Automatic Speech Recognition Accuracy With Multimodal Embeddings Search (2y 5m to grant; granted Mar 17, 2026)
Patent 12567432: VOICE SIGNAL ESTIMATION METHOD AND APPARATUS USING ATTENTION MECHANISM (2y 5m to grant; granted Mar 03, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 92% (+10.5%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 900 resolved cases by this examiner. Grant probability derived from career allow rate.
