Prosecution Insights
Last updated: April 17, 2026
Application No. 18/649,573

ANONYMOUS REAL-TIME CUSTOMER FEEDBACK SYSTEM

Non-Final Office Action (§103)

Filed: Apr 29, 2024
Examiner: CRESPO FEBLES, HECTOR J
Art Unit: 2657
Tech Center: 2600 — Communications
Assignee: unknown
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs Tech Center average)
Interview Lift: +0.0% (minimal, across resolved cases with an interview)
Typical Prosecution Timeline: 2y 9m (average)
Career History: 5 total applications across all art units, 5 currently pending

Statute-Specific Performance

§101: 16.7% (-23.3% vs TC avg)
§103: 58.3% (+18.3% vs TC avg)
§102: 25.0% (-15.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 0 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The disclosure is objected to because of the following informalities:

- Items 203 and 204 in Figure 2 (the machine learning module and the CPU, program memory, RAM memory, and mass storage module) are not referenced or explained in the specification.
- In paragraph [00011] there is a missing space between the word “Figure” and the number that follows it: the sentence reads “Figure1 shows…” when it should read “Figure 1 shows…”.
- In paragraph [0005], the conjunction “But” is followed by a comma, which implies a halt in the flow of the sentence. That halt is unnecessary unless “But” is followed by a parenthetical element, which is not the case here: the paragraph reads “But, many potential feedback providers will balk because…” when it should read “But many potential feedback providers will balk because…”.

Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1 to 3 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Shun Yin CHAU (US 20230107269 A1), hereinafter CHAU, in view of John A. Brussolo (US 3582628 A), hereinafter Brussolo.
Regarding claim 1, CHAU discloses a similar system that consists of:

a plurality of microphones – “A recommender system using an edge computing platform to process concurrently multiple continuous audio streams of conversations from customers in a business establishment, and provide real-time recommendations instantly and on-the-fly in the business establishment including multiple microphone devices installed in the business establishment for simultaneously collecting and recording multiple continuous audio streams of conversations from customers in the business establishment;” (CHAU: [Abstract]);

a natural-language-model module subsystem – “In step 3, the edge computing technology is applied in that an edge device machine is installed in the store for processing the voices collected in step 2. Deep learning models and algorithms for voice processing are pre-installed in the edge device, and applied in this step. Specifically, the deep learning models and algorithms perform the following tasks: Automatic Speech Recognition and Natural Language Processing.” (CHAU: [0032]);

a machine-learning-module subsystem – “In step 3, the edge computing technology is applied in that an edge device machine is installed in the store for processing the voices collected in step 2. Deep learning models and algorithms for voice processing are pre-installed in the edge device, and applied in this step. Specifically, the deep learning models and algorithms perform the following tasks: Automatic Speech Recognition and Natural Language Processing.” (CHAU: [0032]);

and a processing subsystem comprising: a central processing unit; non-volatile program memory; read/write data memory; and mass storage. 
– “A recommender system using an edge computing platform to process concurrently multiple continuous audio streams of conversations from customers in a business establishment, and provide real-time recommendations instantly and on-the-fly in the business establishment including multiple microphone devices installed in the business establishment for simultaneously collecting and recording multiple continuous audio streams of conversations from customers in the business establishment;” (CHAU: [Abstract]);

Note that the concept of a computing platform inherently includes a CPU, RAM memory, and storage, since they are essential for a functional computing platform, further explained as “After the microphones are installed, conversations are collected for processing by an edge device machine that is provided within the store as shown in FIG. 3. The present disclosure applies the technology from the edge computing framework. Human speech will be processed by deep learning models and algorithms for voice processing. These models are pre-installed and run on the edge device machine.” (CHAU: [0040]), where the capability of running the pre-installed models inherently teaches a program stored in non-volatile program memory and executed by a central processing unit.

Likewise, “FIG. 4 shows the edge device being connected to the store database that contains products, service names and tags, brand names and category names and tags, price ranges and related tags, product names, attributes, properties and tags, other tags, and other database entities. The edge device can access the database for the products, brands, services, categories, and their corresponding tags.” (CHAU: [0042]) inherently teaches a mass storage medium where the database is stored.

However, CHAU does not disclose a multiplexor with an analog-to-digital converter subsystem, since CHAU does not explain the hardware used to manage the array of microphone inputs. 
Brussolo teaches a multiplexer-ADC wherein: “The multiplexer-ADC converter is capable of operating in three different modes, including (1) a "digitize" mode in which the converter repetitively converts whatever analog signal is being addressed, (2) a "random" mode in which the converter converts the value then addressed upon receipt of a "convert" command and then stops, and (3) a "sequential" mode, in which the multiplexer is stepped to the next one of its switch positions, converts the value sampled upon receipt of a "convert" command, and then stops.” (Brussolo: column 18 line 29 – line 37).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include a multiplexer and an analog-to-digital converter between the microphone array and the natural-language model, both because of the burden that capturing and interpreting the input from a plurality of microphones represents, and because the captured signal, voice, is an analog signal that propagates in an analog medium while the NLM is a digital model, so analog-to-digital conversion is required.

Regarding claim 2, the combination of CHAU and Brussolo teaches what is inherited from claim 1. Brussolo further teaches wherein the multiplexor is operative to sequentially switch microphone inputs, at predetermined times, in a predetermined order – “The multiplexer-ADC converter is capable of operating in three different modes, including (1) a "digitize" mode in which the converter repetitively converts whatever analog signal is being addressed, (2) a "random" mode in which the converter converts the value then addressed upon receipt of a "convert" command and then stops, and (3) a "sequential" mode, in which the multiplexer is stepped to the next one of its switch positions, converts the value sampled upon receipt of a "convert" command, and then stops.” (Brussolo: column 18 line 29 – line 37). 
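The three operating modes Brussolo describes can be sketched in code. This is a hypothetical illustration only, assuming an idealized 8-bit converter with a 1.0 V reference; the class and method names are invented, not taken from the reference:

```python
# Illustrative sketch (not from Brussolo): a multiplexer feeding one ADC,
# with the three operating modes quoted above. Bit depth and reference
# voltage are assumed values.

class MuxAdc:
    """Switches several analog channels into a single idealized ADC."""

    def __init__(self, channels, bits=8, vref=1.0):
        self.channels = channels  # callables returning a voltage sample
        self.bits = bits
        self.vref = vref
        self.addr = 0             # currently addressed switch position

    def _convert(self, volts):
        # Quantize 0..vref into digital codes 0..2^bits - 1.
        clamped = max(0.0, min(volts, self.vref))
        return int(clamped / self.vref * (2 ** self.bits - 1))

    def digitize(self, n):
        # (1) "digitize" mode: repetitively convert whatever analog
        # signal is being addressed.
        return [self._convert(self.channels[self.addr]()) for _ in range(n)]

    def random_convert(self, addr):
        # (2) "random" mode: convert the value then addressed upon a
        # "convert" command, then stop.
        self.addr = addr
        return self._convert(self.channels[self.addr]())

    def sequential_convert(self):
        # (3) "sequential" mode: step to the next switch position,
        # convert the sampled value, then stop.
        self.addr = (self.addr + 1) % len(self.channels)
        return self._convert(self.channels[self.addr]())
```

In this sketch the "sequential" mode advances the address on every convert command, which is the behavior the rejection reads onto claim 2's switching of inputs in a predetermined order, while the "random" mode corresponds to the claim 3 mapping.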
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the disclosure of Brussolo for the reasons identified above with respect to claim 1, and because a multiplexer requires a signal to address which input will be transferred to the output, and such a signal could be sequential or random depending on the application.

Regarding claim 3, the combination of CHAU and Brussolo teaches what is inherited from claim 1. Brussolo further teaches wherein the multiplexor is operative to sequentially switch microphone inputs, at predetermined times, in a random order – “The multiplexer-ADC converter is capable of operating in three different modes, including (1) a "digitize" mode in which the converter repetitively converts whatever analog signal is being addressed, (2) a "random" mode in which the converter converts the value then addressed upon receipt of a "convert" command and then stops, and (3) a "sequential" mode, in which the multiplexer is stepped to the next one of its switch positions, converts the value sampled upon receipt of a "convert" command, and then stops.” (Brussolo: column 18 line 29 – line 37).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the disclosure of Brussolo for the reasons identified above with respect to claim 1, and because a multiplexer requires a signal to address which input will be transferred to the output, and such a signal could be sequential or random depending on the application.

Regarding claim 6, the combination of CHAU and Brussolo teaches what is inherited from claim 1. CHAU further teaches wherein the natural-language model module is operative to receive the digital equivalent signal and translate and interpret it based on predefined, business-specific, words and phrases. 
- “In the second phase, recommendations are provided back to the customers, as shown in FIGS. 1 and 5. The edge machine processes the text that was generated earlier. Text mining algorithms are applied to detect the most frequent words related to clothing industry as well as the discussed topics in the conversations. Thereafter, the edge machine searches within the store database, as shown in FIG. 4, for any similar items, such as products, brands, discounts that are related with the conversations of the customers. Matching of similar keywords between the converted text from voice and the elements stored in the database are detected. At the end, similar products, brands and offers are shown to the customers on large digital screens that are placed in various locations inside the store, as shown in FIGS. 1 and 5.” (CHAU: [0045]).

Claims 4 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over CHAU and Brussolo, in view of Abramson et al. (US 3718768 A), hereinafter Abramson.

Regarding claim 4, the combination of CHAU and Brussolo teaches what is inherited from claim 1. Abramson further teaches wherein the microphone output is an analog voice signal – “FIG. 1A is a simplified functional block diagram of the transmitting portion of one station and the receiving portion of another station in a voice communications system illustrative of the system. The system generally includes an Acoustic Energy to Electrical Energy Transducer 10, such as a microphone, which converts voice energy into an analog electrical signal. The analog voice signal is continuously presented along line 12 to an Analog-to-Digital Encoder 14 that converts the analog signal into digital code numbers and presents them over lines 16 to a Transmit Interface 18.” (Abramson: Column 5 line 8 – line 19). 
For one of ordinary skill in the art before the effective filing date of the claimed invention, it would have been obvious to identify the microphone as “an Acoustic Energy to Electrical Energy Transducer 10, such as a microphone, which converts voice energy into an analog electrical signal.” (Abramson: column 5 line 12 – line 14). According to the teachings of Abramson, the microphone receives an input voice signal, captures it, and converts it into an analog voice signal at its output. Therefore, the resulting analog voice signal would be considered a predictable result of the addition of a microphone, which would merely perform the same function as it would have performed separately from the invention.

Regarding claim 5, the combination of CHAU and Brussolo teaches what is inherited from claim 1. Abramson further teaches wherein the analog voice signal is converted to a digital equivalent signal by the analog-to-digital convertor – “FIG. 1A is a simplified functional block diagram of the transmitting portion of one station and the receiving portion of another station in a voice communications system illustrative of the system. The system generally includes an Acoustic Energy to Electrical Energy Transducer 10, such as a microphone, which converts voice energy into an analog electrical signal. The analog voice signal is continuously presented along line 12 to an Analog-to-Digital Encoder 14 that converts the analog signal into digital code numbers and presents them over lines 16 to a Transmit Interface 18.” (Abramson: Column 5 line 8 – line 19), where the digital code numbers are the digital equivalent of the analog voice signal converted by an analog-to-digital converter and encoder. 
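The conversion mapped to claims 4 and 5 can be sketched numerically. This is only an illustration: the sample rate, bit depth, and the sine tone standing in for a voice signal are assumptions, not values from Abramson or CHAU:

```python
# Hypothetical sketch of sampling an analog waveform and encoding it into
# digital code numbers, in the manner of Abramson's Analog-to-Digital
# Encoder 14. All numeric parameters are assumed.
import math

def sample_and_encode(signal, rate_hz, duration_s, bits=8):
    """Sample signal(t) in the range -1..1 and quantize to unsigned codes."""
    n = int(rate_hz * duration_s)
    codes = []
    for i in range(n):
        t = i / rate_hz
        v = max(-1.0, min(1.0, signal(t)))            # clamp the analog value
        codes.append(round((v + 1.0) / 2.0 * (2 ** bits - 1)))
    return codes

# A 200 Hz tone stands in for the analog voice signal:
tone = lambda t: math.sin(2 * math.pi * 200 * t)
codes = sample_and_encode(tone, rate_hz=8000, duration_s=0.01)
```

The resulting list of integers is the "digital equivalent signal" in the sense the rejection uses: each code number is a quantized snapshot of the analog waveform.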
For one of ordinary skill in the art before the effective filing date of the claimed invention, it would have been obvious that in order for a digital computer to process an analog signal, such as voice, acquired from an analog medium, the signal needs to be converted into a digital equivalent signal. CHAU teaches various instances where the voice input is processed by a digital computing platform into a digital representation of that voice signal. For example, “Amongst the many functionalities, the edge machine processes multiple continuous streams of human voices that are collected from inside the store pertaining to, e.g., speech audio streams, using deep learning models for speech recognition. Also, the edge machine provides recommendations to customers inside the store by applying cutting edge recommendation algorithms and models.” (CHAU: [0020]), as well as “After the microphones are installed, conversations are collected for processing by an edge device machine that is provided within the store as shown in FIG. 3. The present disclosure applies the technology from the edge computing framework. Human speech will be processed by deep learning models and algorithms for voice processing. These models are pre-installed and run on the edge device machine. Specifically, the deep learning models and algorithms perform tasks similar to Automatic Speech Recognition, Natural Language Processing and Voice Dictation. As described above, these models process the recorded voice recordings to text, and determine correlations between words and phrases.” (CHAU: [0040]).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over CHAU and Brussolo, in view of Euihark Lee (US 20230377005 A1), hereinafter Lee. 
Regarding claim 7, the combination of CHAU and Brussolo teaches what is inherited from claim 1. CHAU further teaches wherein the machine-learning module is operative to receive the translated and interpreted output from the natural-language model module and to categorize it – “Specifically, the edge device machine that processes the recorded conversations from FIG. 2 is shown in FIG. 3, along with a presentation of the output text on the monitor. The edge machine device performs voice dictation by processing the recorded voices using the pre-installed deep learning models for speech recognition. The models detect only words and phrases that are relevant to the products and the services that the particular store is selling or offering. Irrelevant or sensitive information that customers may have discussed is discarded, with the words and phrases that are finally kept, after the filtering, shown at the monitor connected to the edge machine device. At this stage, the store owner can review customers' feedback, reviews, complaints on products, services, prices, and etc. Additionally, the store owner can perform personnel assessment by reading the comments that customers make regarding staff and/or shop assistants.” (CHAU: [0041]).

However, CHAU does not disclose that the claimed system can ascribe to an output a positive or negative perception. Although in CHAU the system can classify the information, determine relevancy to the products available in the store, and classify the sensitivity of the information, it does not teach the system performing a positive or negative sentiment analysis or ascribing that perception to the output, leaving that task to the humans monitoring the system, e.g., staff and/or shop assistants.

Lee further teaches ascribing to an output a positive or negative perception. 
– “In some forms, the sentiment module 114 is configured to determining whether each of the one or more packaging related review is a negative review or a positive review based a sentiment model. In one form, the sentiment model includes a machine learning model that categorizes one or more text words as a positive review or a negative review.” (Lee: [0066]).

For one of ordinary skill in the art before the effective filing date of the claimed invention, it would have been obvious to modify CHAU in view of Brussolo based on the teachings of Lee, leveraging the machine learning classifier to apply that sentiment filter (determining positive or negative) to the information acquired from the audio file and adding that classification tag to the review. The main motivation would be that words and phrases related to products, brands and services that the customers wish to either purchase or not purchase can be captured, since customers generally express to their friends their preference on the like or dislike of certain products that they have purchased in the past (CHAU: [0015]). Likewise, words or phrases that provide certain information on the ways the stores are operated can be captured, since customers generally talk with their friends by expressing openly their satisfaction or dissatisfaction with respect to prices, offered discounts and packages, shop assistants, and/or other employees (CHAU: [0015]).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over CHAU, in view of Brennan Eul I. Mercado (US 8756060 B2), hereinafter Mercado, and Jaekyu SHIM et al. (US 20220044670 A1), hereinafter Shim.

Regarding claim 8, CHAU teaches: b. 
converting each active output from an analog voice signal to an equivalent digital signal; - “To start, the present disclosure allows recording of audio streaming human-voice through microphones that are installed inside retail stores, restaurants or any establishment venue where customers aim at purchasing products or services, and has a processing framework that processes concurrently multiple continuous audio streams of human speech/voices simultaneously.” (CHAU: [0019]).

c. converting the equivalent digital signal to equivalent text; - “Automatic Speech Recognition receives as an input an audio, generates a script out of the human speech, and outputs the text script; whereas Natural Language Processing receives as input a text document, which is analyzed and broken down to sentences and word-tokens, i.e., document tokenization, and outputs relationships of words such as their frequency occurrences, word correlation, words frequently grouped together, topic detection, and etc.” (CHAU: [0033])

d. filtering by a natural-language model module the equivalent text for any predetermined key word; - “Automatic Speech Recognition receives as an input an audio, generates a script out of the human speech, and outputs the text script; whereas Natural Language Processing receives as input a text document, which is analyzed and broken down to sentences and word-tokens, i.e., document tokenization, and outputs relationships of words such as their frequency occurrences, word correlation, words frequently grouped together, topic detection, and etc. The goal is to detect and collect keywords and phrases that are relevant with the store only (useful). By using pretrained voice/speech recognition models, conversations of the customers that contain private or sensitive information are discarded, and only those words and phrases that are relevant to the particular business (useful) are kept.” (CHAU: [0033])

e. 
capturing conversation thread; - “In order to achieve the above-mentioned objects, the present disclosure performs voice processing in two phases. In the first phase, a voice dictation is performed, whereby human speech is converted to text.” (CHAU: [0017])

f. categorizing of the conversation thread by a machine-language module; - “Automatic Speech Recognition receives as an input an audio, generates a script out of the human speech, and outputs the text script; whereas Natural Language Processing receives as input a text document, which is analyzed and broken down to sentences and word-tokens, i.e., document tokenization, and outputs relationships of words such as their frequency occurrences, word correlation, words frequently grouped together, topic detection, and etc. The goal is to detect and collect keywords and phrases that are relevant with the store only (useful). By using pretrained voice/speech recognition models, conversations of the customers that contain private or sensitive information are discarded, and only those words and phrases that are relevant to the particular business (useful) are kept.” (CHAU: [0033])

g. associating a verbatim conversation thread excerpt with its categorization result; - “FIG. 4 shows the edge device being connected to the store database that contains products, service names and tags, brand names and category names and tags, price ranges and related tags, product names, attributes, properties and tags, other tags, and other database entities. The edge device can access the database for the products, brands, services, categories, and their corresponding tags. Thereafter, the words of the customers that are kept after the filtering, as shown in FIG. 3, are matched with these accessed elements, thereby locating similar products and services and hence, resulting in recommendations.” (CHAU: [0042]).

However, CHAU does not disclose the limitations: “a. 
multiplexing outputs of a plurality of microphones such that an output from only a single microphone is active at a time;”, “if no key word found, continuing steps a through d; or if a key word is found, halting further multiplexing;” and “h. determining if the thread is continuing; if thread continues, then continuing steps e though h; if thread ends, then resuming steps a through d.”.

Mercado teaches: a. multiplexing outputs of a plurality of microphones such that an output from only a single microphone is active at a time; - “As seen in FIG. 2, control circuit 200 may suitably include a programmed microprocessor 210 which receives audio signal inputs from one or more microphones 220, 220.sub.1, 220.sub.2, . . . 220.sub.n (collectively 220). Where plural microphones are employed, each analog signal output may be passed through a noise filter 222, 222.sub.1, 222.sub.2, . . . 222.sub.n (collectively 222) buffered by a buffer 224 and the outputs of the buffer is then provided to microprocessor 210. Multiplexing may be provided as needed by a particular sign design environment.” (Mercado: column 2 line 42 – line 51).

Furthermore, Mercado teaches: “In step 304, the raw local data from step 302 is processed to extract local trend data, such as keywords contained therein. For example, audio signals from microphones 220 are filtered, buffered and fed to microprocessor 210. Where several microphones are employed, the outputs may be multiplexed. Microprocessor 210 then utilizes voice recognition software 254 which may suitably include keyword extraction software to extract keywords.” (Mercado: column 3 line 54 – line 58). 
For one of ordinary skill in the art before the effective filing date of the claimed invention, it would have been obvious to modify CHAU in view of Mercado to include a multiplexer to manage the information generated by “multiple microphone devices installed in the business establishment for simultaneously collecting and recording multiple continuous audio streams of conversations from customers in the business establishment” (CHAU: [Abstract]).

However, the combination of CHAU and Mercado does not explicitly teach the limitations “if no key word found, continuing steps a through d; or if a key word is found, halting further multiplexing;” and “h. determining if the thread is continuing; if thread continues, then continuing steps e though h; if thread ends, then resuming steps a through d.”.

SHIM teaches: if no key word found, continuing steps a through d; or if a key word is found, halting further multiplexing; - “According to an embodiment, the mic array 510 may obtain a voice utterance from a user. A voice utterance may include a wake-up utterance that directs activation or calling of an intelligent assistance service, and/or a control utterance that directs operation (e.g., power control or volume control) of a hardware/software configuration included in a control device. The wake-up utterance may be a predetermined keyword (e.g., wake-up keyword) such as “hi,” “hello,” “ABC,” or the like. For example, ABC may be a name assigned to the electronic device (or the voice recognition agent (or artificial intelligence (AI)) of the electronic device), such as Galaxy or the like. In addition, a control utterance may be obtained in the state in which an intelligent assistance service is activated or called by a wake-up utterance. However, this is merely an example, and the embodiment of the disclosure is not limited thereto. 
For example, a voice utterance including a wake-up utterance and a control utterance may be received via the mic array 510.” (SHIM: [0099]).

h. determining if the thread is continuing; if thread continues, then continuing steps e though h; if thread ends, then resuming steps a through d. – “Various embodiments of the present invention relate to an electronic device for performing voice recognition using microphones selected on the basis of the operation state, and an operation method of same. According to an embodiment, the electronic device includes: one or more microphone arrays which include a plurality of microphones; at least one processor operatively connected to the microphone arrays; and at least one memory electrically connected to the processor, wherein the memory may store instructions for the processor to, at the time of execution; receive wake-up utterances, for calling designated voice services, by using a first group of microphones among the plurality of microphones when operating in a first state; operate in a second state in response to the wake-up utterances; and receive subsequent utterances using a second group of microphones among the plurality of microphones when operating in the second state. 
Various other embodiments are also possible.” (SHIM: [Abstract]).

SHIM further teaches: “In accordance with an aspect of the disclosure, an operation method of an electronic device may include: while operating in a first state, receiving a wake-up utterance, which calls a designated voice service, using first group microphones among a plurality of microphones, and in response to the wake-up utterance, changing a state of the electronic device to a second state; and while operating in the second state, receiving a subsequent utterance using second group microphones among the plurality of microphones.” (SHIM: [0007]).

For one of ordinary skill in the art before the effective filing date of the claimed invention, it would have been obvious to modify CHAU in view of Mercado and in further view of SHIM in order to use keywords as a control signal for the process, as shown in: “According to an embodiment, the mic array 510 may obtain a voice utterance from a user. A voice utterance may include a wake-up utterance that directs activation or calling of an intelligent assistance service, and/or a control utterance that directs operation (e.g., power control or volume control) of a hardware/software configuration included in a control device. The wake-up utterance may be a predetermined keyword (e.g., wake-up keyword) such as “hi,” “hello,” “ABC,” or the like.” (SHIM: [0099]), and to listen for a subsequent utterance (a continuing thread), as SHIM teaches in: “If a subsequent utterance is received from the first processor 530, the second processor 540 may process the received subsequent utterance in operation 1211. For example, the second processor 540 may perform natural language processing on the received subsequent utterance. According to an embodiment, after processing the subsequent utterance, the second processor 540 may maintain an activated state (e.g., a wake-up mode) in operation 1213. 
For example, the second processor 540 may process an utterance using the second group microphones 514.” (SHIM: [0161]), and to continue to regular operation if a thread does not continue, as SHIM teaches in: “If a subsequent utterance is not received from the first processor 530, the second processor 540 may perform an operation related to operation 1213.” (SHIM: [0162]).

Further motivation to modify CHAU using the teachings of SHIM, in order to find a keyword in a thread and determine whether that thread continues, is taught by CHAU: “For instance, the captured words and phrases will be related to stores and the way the stores are operated. First, words and phrases that are related to products, brands and services that the customers wish to either purchase or not purchase can be captured since customers generally express their preference on the like or dislike of certain products that they have purchased in the past to their friends. Second, words or phrases that provide certain information on the ways the stores are operated can be captured since customers generally talk with their friends by expressing openly their satisfaction or dissatisfaction with respect to prices, offered discounts and packages, shop assistants, and/or other employers. Indeed, customers are more likely to provide comments, feedback and complaints while talking with their friends. Such real time feedback is an asset in helping the stores to improve their business operations.” (CHAU: [0015]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HECTOR J. CRESPO FEBLES, whose telephone number is (571) 272-4512. The examiner can normally be reached Mon - Fri, 7:30 - 5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. 
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Washburn, can be reached at (571) 272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/H.J.C./ Examiner, Art Unit 2657, 12/12/2025
/DANIEL C WASHBURN/ Supervisory Patent Examiner, Art Unit 2657
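The claim 8 method, as the office action maps it across CHAU, Mercado, and SHIM, can be sketched as a control loop. This is a purely hypothetical illustration: the frames stand in for audio that has already been digitized and transcribed (steps b-d's front end), and the function name, data layout, and keywords are invented:

```python
# Illustrative control loop (not from any cited reference): round-robin
# multiplexing with one microphone active at a time; on a keyword hit,
# multiplexing halts and the conversation thread is captured until it
# ends (an empty frame), after which round-robin scanning resumes.

def run_pipeline(mic_frames, keywords):
    """mic_frames: per-microphone lists of transcribed frames ('' = thread end)."""
    captured = []
    pos = [0] * len(mic_frames)   # per-microphone read position
    mic = 0                       # currently multiplexed microphone
    while any(pos[m] < len(mic_frames[m]) for m in range(len(mic_frames))):
        frames = mic_frames[mic]
        if pos[mic] < len(frames):
            text = frames[pos[mic]]
            pos[mic] += 1
            # step d: filter the text for any predetermined keyword
            if any(k in text.lower() for k in keywords):
                # keyword found: halt multiplexing, capture the thread (e-g)
                thread = [text]
                while pos[mic] < len(frames) and frames[pos[mic]]:
                    thread.append(frames[pos[mic]])   # step h: thread continues
                    pos[mic] += 1
                captured.append(" ".join(thread))
                # thread ended: resume steps a through d
        mic = (mic + 1) % len(mic_frames)             # step a: multiplex
    return captured
```

With two microphones where one conversation mentions "price" and another "brand", the loop captures each keyword-triggered thread as a single excerpt while keyword-free chatter is scanned and discarded.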

Prosecution Timeline

Apr 29, 2024
Application Filed
Jan 09, 2026
Non-Final Rejection — §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
