Prosecution Insights
Last updated: April 19, 2026
Application No. 18/253,107

INFORMATION PROCESSING DEVICE

Final Rejection: §101, §102, §103, §112
Filed: May 16, 2023
Examiner: LEE, JENNIFER V
Art Unit: 3688
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Pignus Inc.
OA Round: 2 (Final)
Grant Probability: 25% (At Risk)
Expected OA Rounds: 3-4
Expected Time to Grant: 4y 3m
Grant Probability With Interview: 67%

Examiner Intelligence

Career Allow Rate: 25% (grants only 25% of cases; 59 granted / 232 resolved; -26.6% vs TC avg)
Interview Lift: +41.5% among resolved cases with an interview (a strong lift)
Typical Timeline: 4y 3m average prosecution; 28 applications currently pending
Career History: 260 total applications across all art units

Statute-Specific Performance

§101: 30.1% (-9.9% vs TC avg)
§103: 32.6% (-7.4% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§112: 21.8% (-18.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 232 resolved cases.
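The headline figures above follow from simple arithmetic on the raw counts. A minimal sketch (the helper name and rounding are ours, not the analytics tool's; note the page rounds the lift to +41.5%, while the unrounded career rate gives +41.6%):

```python
# Sketch: reproduce the dashboard's headline examiner statistics from
# its raw counts. Field names are illustrative, not from any real API.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate(59, 232)                 # 59 granted / 232 resolved
print(f"Career allow rate: {career:.1f}%")   # 25.4%, shown on the page as 25%

# Interview lift: allow rate among resolved cases with an examiner
# interview (67% per the page) minus the career rate.
with_interview = 67.0
print(f"Interview lift: +{with_interview - career:.1f}%")  # +41.6%

# Delta vs. the Tech Center average estimate (career rate minus TC avg).
tc_avg = career + 26.6                       # implied by the -26.6% delta
print(f"vs TC avg: {career - tc_avg:+.1f}%")  # -26.6%
```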

Office Action

Grounds: §101, §102, §103, §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in reply to the communications filed on July 8, 2025. The Applicant's Amendment and Request for Reconsideration has been received and entered. Claims 1-6 are currently pending and have been examined. Claims 1-5 have been amended. Claim 6 is newly added.

Response to Arguments

Applicant's amendments necessitated the new grounds of rejection. Regarding the rejection of claims 1-6 under 35 USC 101, Applicant's arguments have been fully considered but they are not persuasive for the reasons set forth infra. Additionally, the Examiner respectfully notes that the courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas - the 'basic tools of scientific and technological work' that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs. Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012) ("'[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work'" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same). Further, the courts do not distinguish between mental processes that are performed entirely in the human mind and mental processes that require a human to use a physical aid (e.g., pen and paper or a slide rule) to perform the claim limitation. See, e.g., Benson, 409 U.S.
at 67, 65, 175 USPQ at 674-75, 674 (noting that the claimed "conversion of [binary-coded decimal] numerals to pure binary numerals can be done mentally," i.e., "as a person would do it by head and hand."); Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 1139, 120 USPQ2d 1473, 1474 (Fed. Cir. 2016) (holding that claims to a mental process of "translating a functional description of a logic circuit into a hardware component description of the logic circuit" are directed to an abstract idea, because the claims "read on an individual performing the claimed steps mentally or with pencil and paper"). MPEP 2106.04(a)(2).

Correspondingly, the Examiner respectfully argues that "presenting one or more pieces of software to the user based on the one or more selection conditions" could be performed by a human using a physical aid (e.g., presenting software through physical media such as discs). Additionally, a processor coupled to a storage unit, having control instructions stored thereon which when executed by the processor cause the information processing apparatus to perform a control process, and executing control, merely implement the abstract idea on a computer environment.

Applicant's remaining arguments have been fully considered but they are not persuasive. Particularly, Applicant's arguments are directed to the instantly amended claims, and are thus moot in view of the new grounds of rejection.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. - An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C.
112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: all "units" plus their respective functional modifiers recited in claims 1-3. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-6 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA, the applicant, regards as the invention.

Claims 1, 2, 4, and 5 each recite the limitation "the technical terms and the technical expressions". There is insufficient antecedent basis for this limitation in the claims. For examination purposes, Examiner has interpreted "the technical terms and the technical expressions" to mean "the at least one of technical terms and technical expressions". Claims 2-5 depend from claim 1 and thus inherit the deficiencies of claim 1.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-6 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.

Step 1. When considering subject matter eligibility under 35 U.S.C. 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter.

Step 2A - Prong One. If the claims fall within one of the statutory categories, it must then be determined whether the claims recite an abstract idea, law of nature, or natural phenomenon.

Step 2A - Prong Two. If the claims recite an abstract idea, law of nature, or natural phenomenon, it must then be determined whether the claims recite additional elements that integrate the judicial exception into a practical application. If the claims do not recite additional elements that integrate the judicial exception into a practical application, then the claims are directed to a judicial exception.

Step 2B. If the claims are directed to a judicial exception, it must be evaluated whether the claims recite additional elements that amount to an inventive concept (i.e., "significantly more") than the recited judicial exception.

In the instant case, claims 1-3 and 6 are directed to a machine; claim 4 is directed to a process; and claim 5 is directed to a manufacture. A claim "recites" an abstract idea if there are identifiable limitations that fall within at least one of the groupings of abstract ideas enumerated in MPEP 2106.
In the instant case, claim 4, and similarly claims 1 and 5, recite the steps of: a processor coupled to a storage unit, having control instructions stored thereon which when executed by the processor cause the information processing apparatus to perform a control process comprising: accepting a request of a user for software in a form including a general first expression form, wherein the general first expression form is associated with at least one of technical terms and technical expressions used in the selection conditions for selecting software, and includes at least one of terms and expressions different from the technical terms and the technical expressions, and accepting the request by subdividing the request; converting the request accepted in the general first expression form into a technical second expression form, including converting at least one of the terms and expressions included in the general first expression form into at least one of the associated technical terms and technical expressions; setting one or more selection conditions for selecting software based on the request converted in the second expression form; and executing control to present one or more pieces of software to the user based on the one or more selection conditions.

These claim limitations set forth certain methods of organizing human activity, particularly commercial interactions including advertising, marketing, and sales activities/behaviors. Additionally, these steps set forth mental processes, particularly concepts performed in the human mind, including, inter alia, the observation and evaluation of information. Further, the limitations of the claims are not indicative of integration into a practical application.
Taking the claim elements separately, the additional elements of performing the steps via a processor coupled to a storage unit, having control instructions stored thereon which when executed by the processor cause the information processing apparatus to perform a control process, software, and by executing control, merely implement the abstract idea on a computer environment. Considered in combination, the steps of Applicant's method add nothing that is not already present when the steps are considered separately. The remaining claim limitations recited in dependent claims merely narrow the abstract idea and do not recite further additional elements. Thus, claims 1-6 are directed to an abstract idea.

Regarding the independent claims, the technical elements of performing the steps via a processor coupled to a storage unit, having control instructions stored thereon which when executed by the processor cause the information processing apparatus to perform a control process, software, and by executing control merely implement the abstract idea on a computer environment. Additionally, the dependent claims do not recite further technical elements. When considering the elements and combinations of elements, the claims as a whole do not amount to significantly more than the abstract idea itself. This is because the claims do not amount to an improvement to another technology or technical field; the claims do not amount to an improvement to the functioning of a computer itself; the claims do not move beyond a general link of the use of an abstract idea to a particular technological environment; the claims merely amount to the application of, or instructions to apply, the abstract idea on a computer; and the claims amount to nothing more than requiring a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known to the industry. The analysis above applies to all statutory categories of invention.
Accordingly, claims 1-6 are rejected as ineligible for patenting under 35 USC 101 based upon the same rationale.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless -

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-5 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Stolze (US PG Pub. 2002/0004764).

As per claim 1, Stolze teaches an information processing apparatus for selecting software including functions required by a user, comprising:

a processor coupled to a storage unit, having control instructions stored thereon which when executed by the processor cause the information processing apparatus to perform a control process comprising: (Stolze: [0035])

accepting a request of a user for software in a form including a general first expression form, wherein the general first expression form is associated with at least one of technical terms and technical expressions used in the selection conditions for selecting software, and includes at least one of terms and expressions different from the technical terms and the technical expressions, and accepting the request by subdividing the request; (Stolze: Figs.
6A-6C; [0039] (At any stage in this process, the user may request needs-based interview assistance by selecting an option from the screen display. When this occurs, the catalog engine 5 issues a request via interface 18 to interview management component 3 for a suitable question to be presented to the user. This question request indicates the session id assigned by catalog engine 5 for the particular user session, and also data indicating the current set of products which have been determined as potentially suitable by catalog engine 5 based on the feature-based filtering conducted thus far. This data can be any data which identifies the products, e.g. a list of product names, or the addresses of the product descriptions in database 4. On receipt of the question request, the session manager 24 of control logic 8 identifies the request as relating to a new interview session from the session id supplied by catalog engine 5, and may, if desired, assign a new "interview session id", or may simply log the supplied session id as the identifier for distinguishing between requests relating to different user sessions. The question request is forwarded (with the new interview session id if appropriate) to QA planner 22. . . After this initialization step, the QA planner 22 uses the retrieved data to select the best question to be presented to the user, and then supplies this question (and associated answers) to the catalog engine 5 via session manager 24.); Fig. 5; [0054]-[0057] (FIGS. 3 and 4 describe the basic question selection process performed by the QA planner 22, but, as described earlier, a needs-based interviewing session may involve selection of successive questions at the request of the feature-based catalog engine 5. The flow chart of FIG. 5 illustrates the basic steps of a complete interviewing session in the QA planner 22. The new session begins at step 50 and the question request is forwarded to the QA planner by the
session manager 24 as previously described. . . . In step 53 the QA planner selects the best question in accordance with FIGS. 3 and 4 above. In step 54, this question, and associated answers, are supplied to the feature-based (F-B) engine 5 for display to the user. . . . After the user has answered the question, the F-B engine 5 returns the given answer to the interview component 3 where it is received by the QA planner at step 55 of FIG. 5. . . . The session thus continues in accordance with FIG. 5 until no further questions are received at step 59, i.e. until the session is actively terminated by the feature-based engine or QA planner, or, for example, because the session id expires and the session is thus deemed to be terminated.); [0058]-[0103])

converting the request accepted in the general first expression form into a technical second expression form, including converting at least one of the terms and expressions included in the general first expression form into at least one of the associated technical terms and technical expressions; (Stolze: Fig. 5; [0044]-[0053]; [0054]-[0057] (After the user has answered the question, the F-B engine 5 returns the given answer to the interview component 3 where it is received by the QA planner at step 55 of FIG. 5. The QA planner checks in step 56 whether the answer results in firing of any rules. If so, in step 57 the QA planner identifies the feature constraint(s) required by the fired rule(s) and supplies these to the F-B engine 5 via session manager 24. . . . . In this question selection process, the same product scores P_reject and answer probabilities P_answer used in the first question selection process could be employed.
However, provision of the prediction engine 20 in this preferred embodiment allows the QA planner to retrieve a new set of product scores P_reject and answer probabilities P_answer from the prediction engine database 21 based on the current status of the interview session, and in particular the current set of needs and rejected products resulting from previous question-answer cycles. Using generally known techniques which need not be described in detail here, the prediction engine is automatically "trained" by monitoring interview sessions, accumulating statistical data about how users answer the various questions, and which products are rejected as a result, for different stages in interview sessions. Based on this statistical data, the prediction engine can derive (and continually refine) different values for product scores and answer probabilities applicable in different circumstances. These predicted values are stored in database 21. Thus, when selecting the second and subsequent questions in a given interview session, the QA planner 22 can request the applicable values for the current set of known user needs, according to the current status of the interview, from the prediction engine 20, and these values, if available, can be used in place of the original product score and answer probability values.); Figs.
6A-6C; [0058]-[0103] (
need-1 <presentations> with prompt "Do you intend to do presentations with your computer?" and possible answers <yes> (p=0.3) and <no> (p=0.7)
need-2 <main use> with prompt "What will you mainly use your computer for?" and possible answers <game playing> (p=0.3), <word processing> (p=0.3), <spread sheets> (p=0.2), <publishing image processing> (p=0.2), <program development or CAD> (p=0.2), <data storage>
need-3 <data> with prompt "Will you use your computer to store client data and/or document collections?" and possible answers <yes> (p=0.3), and <no> (p=0.7)
need-4 <modelRange> with prompt "How do you want to use your computer?" and possible answers <private use> (p=0.3), <business use> (p=0.5), <technical use> (p=0.2).
rule-1: if-needs (<presentations>=<yes>) then-require (<LCD screen type>=<active>)
rule-2: if-needs (<main use>=<game playing>) then-require ((<processor speed>=<300>) OR (<processor speed>=<400>))
rule-3: if-needs ((<modelRange>=<business use>) AND (<main use>=<word processing>)) then-require (((<processor speed>=<300>) OR (<processor speed>=<400>)) AND ((<hard disk GB>=<3>) OR (<hard disk GB>=<4>)))
rule-4: if-needs ((<modelRange>=<business use>) AND (<main use>=<spread sheets>)) then-require (<processor speed>=<400>)).

setting one or more selection conditions for selecting software based on the request converted in the second expression form; and (Stolze: Fig. 5; [0054]-[0057] (After the user has answered the question, the F-B engine 5 returns the given answer to the interview component 3 where it is received by the QA planner at step 55 of FIG. 5. The QA planner checks in step 56 whether the answer results in firing of any rules. If so, in step 57 the QA planner identifies the feature constraint(s) required by the fired rule(s) and supplies these to the F-B engine 5 via session manager 24.
The engine 5 then determines from the constraint(s) whether any products are excluded from the set of potentially suitable products, and displays the resulting product set to the user.); Figs. 6A-6C; [0058]-[0103] (
rule-1: if-needs (<presentations>=<yes>) then-require (<LCD screen type>=<active>)
rule-2: if-needs (<main use>=<game playing>) then-require ((<processor speed>=<300>) OR (<processor speed>=<400>))
rule-3: if-needs ((<modelRange>=<business use>) AND (<main use>=<word processing>)) then-require (((<processor speed>=<300>) OR (<processor speed>=<400>)) AND ((<hard disk GB>=<3>) OR (<hard disk GB>=<4>)))
rule-4: if-needs ((<modelRange>=<business use>) AND (<main use>=<spread sheets>)) then-require (<processor speed>=<400>)
. . . . Thus, needs-question <presentations> is the only one receiving a score (<data> is never used in a rule in this example) and thus gets asked as shown in FIG. 6c. In this scenario, the user answers <yes> to this question which makes rule 1 fire, and laptop 4 is eliminated from the product set by feature-based engine 5. Thus the user has been successfully led to identification of a suitable product, namely laptop 2.).

executing control to present one or more pieces of software to the user based on the one or more selection conditions. (Stolze: Fig. 5; [0054]-[0057] (The engine 5 then determines from the constraint(s) whether any products are excluded from the set of potentially suitable products, and displays the resulting product set to the user.); [0058]-[0103] (Thus, needs-question <presentations> is the only one receiving a score (<data> is never used in a rule in this example) and thus gets asked as shown in FIG. 6c. In this scenario, the user answers <yes> to this question which makes rule 1 fire, and laptop 4 is eliminated from the product set by feature-based engine 5. Thus the user has been successfully led to identification of a suitable product, namely laptop 2.)).
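As an aside, the if-needs/then-require mechanism quoted from Stolze can be sketched in a few lines: answers to needs-questions fire rules, fired rules impose feature constraints, and the constraints filter the candidate set. This is an illustrative paraphrase of Stolze's laptop example, not code from the reference; all names and data are ours.

```python
# Minimal sketch of Stolze's needs-based rule firing: answers fire
# "if-needs ... then-require ..." rules whose feature constraints
# filter the candidate product set. Data and names are illustrative.

products = {
    "laptop-2": {"LCD screen type": "active", "processor speed": 400},
    "laptop-4": {"LCD screen type": "passive", "processor speed": 300},
}

rules = [
    # rule-1: if-needs (<presentations>=<yes>) then-require (<LCD screen type>=<active>)
    {"if_needs": ("presentations", "yes"),
     "then_require": ("LCD screen type", {"active"})},
    # rule-2: if-needs (<main use>=<game playing>)
    #         then-require (<processor speed>=<300> OR <400>)
    {"if_needs": ("main use", "game playing"),
     "then_require": ("processor speed", {300, 400})},
]

def fire_rules(answers: dict) -> list:
    """Return the constraints required by every rule whose needs are met."""
    return [r["then_require"] for r in rules
            if answers.get(r["if_needs"][0]) == r["if_needs"][1]]

def filter_products(answers: dict) -> set:
    """Exclude products that violate any fired constraint (cf. Stolze, FIG. 5)."""
    constraints = fire_rules(answers)
    return {name for name, feats in products.items()
            if all(feats[feature] in allowed for feature, allowed in constraints)}

# The user answers <yes> to <presentations>: rule-1 fires, the passive-screen
# laptop-4 is eliminated, and laptop-2 remains.
print(filter_products({"presentations": "yes"}))  # {'laptop-2'}
```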
Examiner Note: The Examiner notes, with regard to the limitation reciting "accepting a request of a user for software in a form including a general first expression form, wherein the general first expression form is associated with at least one of technical terms and technical expressions used in the selection conditions for selecting software, and includes at least one of terms and expressions different from the technical terms and the technical expressions, and accepting the request by subdividing the request;" (emphasis added), the bolded portions are merely a statement of intended use as the language does not result in a manipulative difference between the claimed invention and the prior art. See, e.g., In re Otto, 312 F.2d 937, 938, 136 USPQ 458, 459 (CCPA 1963) (The claims were directed to a core member for hair curlers and a process of making a core member for hair curlers. The court held that the intended use of hair curling was of no significance to the structure and process of making.)

As per claim 2, Stolze teaches wherein the accepting the request by subdividing the request accepts the request of the user in the first general expression form by repeating questioning to and answering from the user, and includes extracting at least one of the terms and the expressions in the general first expression form that corresponds to at least one of the technical terms and the technical expressions associated with items required as the selection conditions for selecting software. (Stolze: [0039] (At any stage in this process, the user may request needs-based interview assistance by selecting an option from the screen display. When this occurs, the catalog engine 5 issues a request via interface 18 to interview management component 3 for a suitable question to be presented to the user.); Fig. 5; [0054]-[0057] (FIGS.
3 and 4 describe the basic question selection process performed by the QA planner 22, but, as described earlier, a needs-based interviewing session may involve selection of successive questions at the request of the feature-based catalog engine 5. The flow chart of FIG. 5 illustrates the basic steps of a complete interviewing session in the QA planner 22. The new session begins at step 50 and the question request is forwarded to the QA planner by the session manager 24 as previously described. . . . In step 53 the QA planner selects the best question in accordance with FIGS. 3 and 4 above. In step 54, this question, and associated answers, are supplied to the feature-based (F-B) engine 5 for display to the user. . . . After the user has answered the question, the F-B engine 5 returns the given answer to the interview component 3 where it is received by the QA planner at step 55 of FIG. 5. . . . Next, in step 59 the QA planner awaits a further question request relating to the current interview session. If the user needs further assistance (after further feature-based filtering or otherwise), a new question request is received from engine 5 under the same session id. Operation then reverts to step 53 where the next best question is selected based on the new product set defined in the question request. . . . Using generally known techniques which need not be described in detail here, the prediction engine is automatically "trained" by monitoring interview sessions, accumulating statistical data about how users answer the various questions, and which products are rejected as a result, for different stages in interview sessions. Based on this statistical data, the prediction engine can derive (and continually refine) different values for product scores and answer probabilities applicable in different circumstances. . . .
Thus, when selecting the second and subsequent questions in a given interview session, the QA planner 22 can request the applicable values for the current set of known user needs, according to the current status of the interview, from the prediction engine 20, and these values, if available, can be used in place of the original product score and answer probability values. The session thus continues in accordance with FIG. 5 until no further questions are received at step 59, i.e. until the session is actively terminated by the feature-based engine or QA planner, or, for example, because the session id expires and the session is thus deemed to be terminated.); Figs. 6A-6C; [0058]-[0103] (
need-1 <presentations> with prompt "Do you intend to do presentations with your computer?" and possible answers <yes> (p=0.3) and <no> (p=0.7)
need-2 <main use> with prompt "What will you mainly use your computer for?" and possible answers <game playing> (p=0.3), <word processing> (p=0.3), <spread sheets> (p=0.2), <publishing image processing> (p=0.2), <program development or CAD> (p=0.2), <data storage>
need-3 <data> with prompt "Will you use your computer to store client data and/or document collections?" and possible answers <yes> (p=0.3), and <no> (p=0.7)
need-4 <modelRange> with prompt "How do you want to use your computer?" and possible answers <private use> (p=0.3), <business use> (p=0.5), <technical use> (p=0.2).)

As per claim 3, Stolze teaches wherein an answer to a question in the questioning and answering is composed of choices that are a plurality of selectable answers which include at least one of the terms and the expressions in the general first expression form and wherein converting the request into the technical second expression form includes converting the answer into at least one of the technical terms and technical expressions. (Stolze: Fig. 5; Figs. 6A-6C; [0054]-[0057] (In step 53 the QA planner selects the best question in accordance with FIGS. 3 and 4 above.
In step 54, this question, and associated answers, are supplied to the feature-based (F-B) engine 5 for display to the user. . . . After the user has answered the question, the F-B engine 5 returns the given answer to the interview component 3 where it is received by the QA planner at step 55 of FIG. 5.); Figs. 6A-6C; [0058]-[0103] (need-1 <presentations> with prompt “Do you intend to do presentations with your computer?” and possible answers <yes> (p=0.3) and <no> (p=0.7) need-2 <main use> with prompt “What will you mainly use your computer for?” and possible answers <game playing> (p=0.3), <word processing> (p=0.3), <spread sheets> (p=0.2), <publishing image processing> (p=0.2), <program development or CAD> (p=0.2), <data storage> need-3 <data> with prompt “Will you use your computer to store client data and/or document collections?” and possible answers <yes> (p=0.3), and <no> (p=0.7) need-4 <modelRange> with prompt “How to you want to use your computer?” and possible answers <private use> (p=0.3), <business use> (p=0.5), <technical use> (p=0.2).) the acceptance unit comprises label applying unit, the label applying unit applying a label to at least one answer among the choices. (Stolze: Fig. 5; [0054]-[0057] (In step 54, this question, and associated answers, are supplied to the feature-based (F-B) engine 5 for display to the user. . . . When supplying the answers here, the QA planner need not supply all the answers which are defined in the Needs data as associated with a particular question. In particular, the QA planner preferably selects only those answers which had a non-zero answer score as a result of the rule weight distribution process, since these are the answers which are relevant for causing rules to fire.); Figs. 
6A-6C; [0058]-[0103] (need-1 <presentations> with prompt “Do you intend to do presentations with your computer?” and possible answers <yes> (p=0.3) and <no> (p=0.7) need-2 <main use> with prompt “What will you mainly use your computer for?” and possible answers <game playing> (p=0.3), <word processing> (p=0.3), <spread sheets> (p=0.2), <publishing image processing> (p=0.2), <program development or CAD> (p=0.2), <data storage> need-3 <data> with prompt “Will you use your computer to store client data and/or document collections?” and possible answers <yes> (p=0.3), and <no> (p=0.7) need-4 <modelRange> with prompt “How to you want to use your computer?” and possible answers <private use> (p=0.3), <business use> (p=0.5), <technical use> (p=0.2).) As per claim 4, this claim is substantially similar to claim 1 and is therefore rejected in the same manner as this claim, as set forth above. As per claim 5, this claim is substantially similar to claim 1 and is therefore rejected in the same manner as this claim, as set forth above. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Stolze in view of Komuves (US PGP 2013/0066885).

As per claim 6, Stolze teaches wherein the control process further comprises applying a label to at least one answer among the choices, . . . (Stolze: Figs. 6A-6C; [0036]; [0060]-[0089])

Stolze does not explicitly state the following known technique, which is taught by Komuves: . . . the label presenting advice according to a history of statistics of answers selected by a plurality of users. (Komuves: Fig. 6; [0040] (The screenshot 600 also includes a series of Popularity Modules 615-655 that are associated with the respective Content Objects 610-650. The Popularity Modules 615-655 provide a variety of information related to the user rating of the respective Content Objects 610-650 including the Popularity, Popularity Trend, Percentage Liked, Number of Likes, Number of Dislikes, and Total Number of Views.); Figs. 3-4; [0025]-[0038] (disclosing the system calculates a Percentage Liked 330 (liked ratio) based on the total number of user likes and the total number of user ratings.))

This known technique is applicable to the method of Stolze as they both share characteristics and capabilities; namely, they are directed to presenting selectable options to users.
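The cited "Percentage Liked" is a simple liked ratio (likes divided by total ratings), and the point the rejection draws from Komuves [0056] is that the same ratio is more trustworthy when it rests on more ratings. A minimal sketch of that idea, with hypothetical names: the Wilson lower bound below is an illustrative stand-in for a count-aware adjustment, not the formula Komuves actually discloses.

```python
import math

def percentage_liked(likes: int, total_ratings: int) -> float:
    """Liked ratio as described for the Percentage Liked 330 figure:
    total user likes divided by total user ratings."""
    return likes / total_ratings if total_ratings else 0.0

def confidence_adjusted_score(likes: int, total_ratings: int, z: float = 1.96) -> float:
    """Wilson score lower bound on the liked ratio: the same ratio
    scores lower when it is backed by fewer ratings (illustrative
    stand-in, not the reference's actual computation)."""
    n = total_ratings
    if n == 0:
        return 0.0
    p = likes / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)
    return (centre - margin) / (1 + z * z / n)
```

With the same 80% liked ratio, an object rated 800/1000 scores markedly higher than one rated 8/10, which is the count-awareness the rejection attributes to Komuves.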
One of ordinary skill in the art at the time of filing would have recognized that applying the known technique of Komuves would have yielded predictable results and resulted in an improved method. It would have been recognized that applying the technique of Komuves to the teachings of Stolze would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such history-of-statistics features into similar methods. Further, applying the label presenting advice according to a history of statistics of answers selected by a plurality of users to the label of Stolze would have been recognized by those of ordinary skill in the art as resulting in an improved method for rating the popularity of content objects in information systems, one that accounts for the overall number of ratings with respect to other content objects such that the confidence of the rating is improved. (Komuves: Para. [0056])

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Chitrapura (US Pat No 11,295,375) -- use of custom dimensions to map software application programs to a business user's needs
Herling (US PGP 2012/0296759) -- determining a customer's needs by the questions generated for and answered by the potential customer
Kishen (US PGP 2004/0103065) -- generating a set of questions and prompting the customer to provide answers; based on the answers provided by the customer, the best-suited product may be identified and presented to the customer

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JENNIFER V LEE, whose telephone number is (571) 272-4778. The examiner can normally be reached Monday - Friday, 9 AM - 5 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JEFFREY A. SMITH, can be reached at (571) 272-6763. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JENNIFER V LEE/
Examiner, Art Unit 3688

/Jeffrey A. Smith/
Supervisory Patent Examiner, Art Unit 3688
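For orientation, the Stolze interview mechanics the action relies on above (FIG. 5: the QA planner selects the best question, the feature-based engine displays it and returns the user's answer, and only answers with a non-zero answer score are offered) can be sketched as follows. All names are hypothetical and the selection criterion is a stand-in for Stolze's FIGS. 3-4 procedure; this is a reading of the cited passages, not the reference's implementation.

```python
from dataclasses import dataclass

@dataclass
class Need:
    """A user need as in Stolze's Figs. 6A-6C: a prompt plus possible
    answers with prior probabilities (e.g. <yes> p=0.3, <no> p=0.7)."""
    name: str
    prompt: str
    priors: dict[str, float]         # answer -> prior answer probability
    answer_scores: dict[str, float]  # answer -> rule-weight score

def offered_answers(need: Need) -> list[str]:
    # Per [0054]-[0057]: supply only answers with a non-zero answer
    # score, since only these can cause rules to fire.
    return [a for a, s in need.answer_scores.items() if s != 0]

def select_best_question(open_needs: list[Need]) -> Need:
    # Stand-in criterion: ask about the need whose answers carry the
    # most rule weight (Stolze's actual criterion is in FIGS. 3-4).
    return max(open_needs, key=lambda n: sum(n.answer_scores.values()))

def run_interview(needs: list[Need], ask) -> dict[str, str]:
    """The FIG. 5 loop: select a question, display it, receive the
    answer, and repeat until no open needs remain."""
    known: dict[str, str] = {}
    open_needs = list(needs)
    while open_needs:
        q = select_best_question(open_needs)
        known[q.name] = ask(q.prompt, offered_answers(q))
        open_needs.remove(q)
    return known
```

In Stolze, the prediction engine can also re-score the remaining needs after each answer; the sketch omits that refinement.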

Prosecution Timeline

May 16, 2023
Application Filed
Apr 04, 2025
Non-Final Rejection — §101, §102, §103
Jul 08, 2025
Response Filed
Oct 12, 2025
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602716
SYSTEMS AND METHODS FOR GENERATING RECOMMENDATIONS BASED ON ONLINE HISTORY INFORMATION AND GEOSPATIAL DATA
2y 5m to grant Granted Apr 14, 2026
Patent 12548069
METHOD, SYSTEM, AND COMPUTER-READABLE STORAGE MEDIUM FOR AUGMENTED REALITY-BASED FACE AND CLOTHING EFFECT
2y 5m to grant Granted Feb 10, 2026
Patent 12541782
METHOD, SYSTEM, AND COMPUTER-READABLE STORAGE MEDIUM FOR PRODUCT OBJECT PUBLISHING AND CONCURRENT IMAGE RECOGNITION
2y 5m to grant Granted Feb 03, 2026
Patent 12461976
METHOD AND SYSTEM FOR CAPTURING DATA FROM REQUESTS TRANSMITTED ON WEBSITES
2y 5m to grant Granted Nov 04, 2025
Patent 12462289
DEVICE, METHOD, AND COMPUTER-READABLE MEDIA FOR RECOMMENDATION NETWORKING BASED ON CONNECTIONS, CHARACTERISTICS, AND ASSETS USING MACHINE LEARNING
2y 5m to grant Granted Nov 04, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
25%
Grant Probability
67%
With Interview (+41.5%)
4y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 232 resolved cases by this examiner. Grant probability derived from career allow rate.
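The headline projections appear to fall out of the career numbers directly; the sketch below reproduces them assuming the interview lift is applied as additive percentage points (an assumption about how this page derives its figures, not a documented formula).

```python
# Career record shown above for this examiner (assumed inputs)
granted, resolved = 59, 232
interview_lift_pts = 41.5  # percentage-point lift with an interview

base_grant_pct = 100 * granted / resolved          # career allow rate
with_interview_pct = base_grant_pct + interview_lift_pts
```

Rounded, these give the 25% and 67% figures shown.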
