DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 25 February 2026 has been entered.
This is a response to Applicant’s amendment filed on 25 February 2026, wherein:
Claims 1 and 7 are amended.
Claims 2 and 8 are canceled.
Claims 3-6 and 9-12 are original.
Claims 1, 3-7, and 9-12 are pending.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date as follows:
The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).
The disclosure of the prior-filed applications, KR10-2023-0133128 and KR10-2024-111014, fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for one or more claims of this application.
In particular, the disclosure of the prior-filed applications fails to provide sufficient written description for “outputting structured video content for each evaluation index so as to elicit an interaction to an examinee at each stimulus time-frame of a preset timeline;… analyzing the response data for each evaluation index using an Artificial Intelligence (AI) analysis module corresponding to the evaluation index, wherein the structured video content is configured using a scenario for eliciting the interaction based on a characteristic of the examinee and a protocol presented by a test tool, wherein the response data includes visual and auditory information corresponding to at least one of eye contact, name-calling response, joint attention, imitation behavior, pointing gesture, social referencing, social smiling, and language behavior, wherein the AI analysis module includes at least one of a gaze tracking module, a facial expression recognition module, a head pose estimation module, and a plurality of evaluation-specific analysis modules corresponding to respective evaluation indices, each configured to detect a response of the examinee during the corresponding response timeframe,… encoding data including the preset timeline and the evaluation indices of the collected response data;… decoding the encoded response data together with the preset timeline and the evaluation indices to perform structured AI analysis; and selecting, for each evaluation index, an AI analysis module suited to an analysis purpose corresponding to the evaluation index, and sequentially or in parallel driving the selected AI analysis modules in consideration of limited computing resources” in claims 1 and 7 and “outputting a result of analyzing the response data for each evaluation index” in claims 6 and 12 to show one of ordinary skill in the art that Applicant had possession of the claimed invention.
Claims may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. It is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. See MPEP 2161.01(I). In particular, the specifications of the prior-filed applications, at best, merely recite similar language as the claims without providing any substantive description for the claimed limitations identified above. The prior-filed applications fail for the same reasons that the instant specification fails, as identified in the rejections of the claims under 35 U.S.C. 112(a) below for the same claim limitations.
Thus, claims 1, 3-7, and 9-12 do not gain benefit of priority to KR10-2023-0133128 and KR10-2024-111014. Therefore, claims 1, 3-7, and 9-12 have an effective filing date of 26 September 2024.
Specification
The disclosure is objected to because of the following informalities:
Para. 43 recites “an time-frame”. This should be “a time-frame”.
It is further unclear why “time frame” is hyphenated such that it is written “time-frame” throughout the disclosure.
Appropriate correction is required.
Claim Objections
Claims 1, 3-7, and 9-12 are objected to because of the following informalities:
Claims 1 and 7 are inconsistently formatted. Some limitations end with a comma, while others end with a semicolon. Uniformity is recommended.
The limitations and sub-limitations in claims 1 and 7 are all indented to the same level. This decreases clarity. Sub-limitations should be further indented to clearly identify that they are sub-limitations of a preceding limitation.
Dependent claims 3-6 and 9-12 inherit the deficiencies of their respective parent claims, and are thus objected to under the same rationale.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 1, 3-7, and 9-12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Regarding claims 1 and 7, it is unclear what constitutes the metes and bounds of “decoding the encoded response data together with the preset timeline and the evaluation indices to perform structured AI analysis”. In particular, the encoding step recites “encoding data including the preset timeline and the evaluation indices of the collected response data” and is immediately followed by the limitation “storing… or transmitting the encoded response data…”. Thus, “the encoded response data” is only identified to explicitly include the preset timeline and the evaluation indices of the collected response data. This causes a lack of clarity because the “decoding” step distinguishes the encoded response data as separate from the preset timeline and the evaluation indices. Thus, it is unclear what constitutes “the encoded response data” when the encoding step identifies that the preset timeline and the evaluation indices are the encoded response data but the decoding step identifies the encoded response data as separate from the preset timeline and the evaluation indices. Therefore, one of ordinary skill in the art would not be apprised of the metes and bounds of the patent protection sought. For the purposes of compact prosecution, the preset timeline and the evaluation indices are construed as included in the encoded response data as recited in the encoding step and in the “storing or transmitting” step such that the decoding step recites “decoding the encoded response data to perform structured AI analysis”. Dependent claims 3-6 and 9-12 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
Further regarding claims 1 and 7, it is unclear what constitutes the metes and bounds of “selecting, for each evaluation index, an AI analysis module suited to an analysis purpose corresponding to the evaluation index, and sequentially or in parallel driving the selected AI analysis modules in consideration of limited computing resources.” In particular, the language “and sequentially or in parallel driving the selected AI analysis modules in consideration of limited computing resources” is grammatically incorrect causing one of ordinary skill in the art to not be apprised of the metes and bounds of the patent protection sought. For instance, it is unclear whether “and sequentially” is tied to the “selecting” language or not. If not, “sequentially or in parallel driving the selected AI analysis modules in consideration of limited computing resources” also is unclear because it is unclear how “sequentially or in parallel” is applied to this language. For the purposes of compact prosecution, “sequentially or in parallel” is construed as describing the “driving” such that the limitation recites “and driving the selected AI analysis modules, sequentially or in parallel, wherein the driving is performed in consideration of limited computing resources”. Dependent claims 3-6 and 9-12 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
The term “limited computing resources” in each of claims 1 and 7 is a relative term which renders each claim indefinite. The term “limited computing resources” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. This, in turn, causes one of ordinary skill in the art to not be apprised of the metes and bounds of “in consideration of limited computing resources”. In other words, it is unclear how any function, let alone driving the selected AI analysis modules, can be performed “in consideration of limited computing resources” when the disclosure neither defines nor provides a standard for ascertaining how computing resources may be identified as “limited”. Dependent claims 3-6 and 9-12 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
The text of 35 U.S.C. 112(a) not included in this action can be found in a prior Office action.
Claims 1, 3-7, and 9-12 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding claims 1 and 7, the disclosure fails to provide sufficient written description for “outputting structured video content for each evaluation index so as to elicit an interaction to an examinee at each stimulus time-frame of a preset timeline” and “wherein the structured video content is configured using a scenario for eliciting the interaction based on a characteristic of the examinee and a protocol presented by a test tool” to show one of ordinary skill in the art that Applicant had possession of the claimed invention. Claims may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. It is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. See MPEP 2161.01(I). The disclosure, at best, merely recites similar language as the claim without any meaningful description of the claimed functionality. See, for example, at least para. 42, 59-62, and 80-83 of the specification. In particular, Applicant asserted that para. 42, 60, and 81 of the specification provide support for these limitations (see Remarks filed 27 November 2025 at pg. 5). However, each of these paragraphs is silent regarding these limitations. Para. 42 is closer than either para. 60 or para. 81. Yet, it is silent regarding “structured video content”, let alone any particular structured video content meant to “elicit an interaction at each stimulus time-frame” or a scenario “eliciting the interaction based on a characteristic of the examinee and a protocol presented by a test tool”. Furthermore, the disclosure is silent regarding any “characteristic of the examinee” as well as any “protocol presented by a test tool”. Dependent claims 3-6 and 9-12 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
Regarding claims 1, 6, 7, and 12, the disclosure fails to provide sufficient written description for “analyzing the response data for each evaluation index using an Artificial Intelligence (AI) analysis module corresponding to the evaluation index,… wherein the response data includes visual and auditory information corresponding to at least one of eye contact, name-calling response, joint attention, imitation behavior, pointing gesture, social referencing, social smiling, and language behavior, wherein the AI analysis module includes at least one of a gaze tracking module, a facial expression recognition module, a head pose estimation module, and a plurality of evaluation-specific analysis modules corresponding to respective evaluation indices, each configured to detect a response of the examinee during the corresponding response timeframe” in claims 1 and 7 and “outputting a result of analyzing the response data for each evaluation index” in claims 6 and 12 to show one of ordinary skill in the art that Applicant had possession of the claimed invention. Claims may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). 
In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. It is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. See MPEP 2161.01(I). The disclosure, at best, merely recites similar language as the claim without any meaningful description of any artificial intelligence analysis. See, for example, at least Fig. 2-5, which merely illustrate the AI Analysis Modules as black boxes, and para. 35, 42-46, 52, 58, 63, 73-76, 83, 85, 90, 93-97, 99, and 110 of the specification. In particular, while Applicant asserts that para. 42, 43, 45, 46, 74-76, 83, 85, 96, and 97 of the specification provide support for these limitations (see Remarks filed 27 November 2025 at pg. 6), the most pertinent disclosure is found in para. 46, 76, and 97, which merely recite, as intended uses, that non-descript AI analysis modules correspond to “gaze tracking for eye contact detection, facial expression recognition for social smiling detection, and head pose estimation for name-calling response detection”. Thus, the disclosure is silent regarding any meaningful description of the steps, calculations, or algorithms necessary to perform the claimed functionality. Dependent claims 3-6 and 9-12 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
Further regarding claims 1 and 7, the disclosure fails to provide sufficient written description for “encoding data including the preset timeline and the evaluation indices of the collected response data” and “decoding the encoded response data together with the preset timeline and the evaluation indices to perform structured AI analysis” to show one of ordinary skill in the art that Applicant had possession of the claimed invention. Claims may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. It is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. See MPEP 2161.01(I). The disclosure, at best, merely recites similar language as the claim without any meaningful description of the claimed encoding and decoding. See, for example, at least Fig. 4, which merely illustrates a collected data encoding unit 406 and a response collected data decoding unit 408 as non-descript black boxes, and the corresponding description in the specification. In particular, Applicant asserts that para. 70-75 of the specification provide support for these limitations (see current Remarks at pg. 6). Within these paragraphs, para. 70 and 72 are specific to these limitations but merely recite similar language as the claim limitations without any description. Similarly, para. 91 and 93, with respect to Fig. 5, recite “at step S503, data including the timelines and evaluation indices of the collected response data may be encoded” and “at step S503, the encoded response data, together with the timelines and the evaluation indices, may be decoded so as to perform structured AI analysis on the encoded response data”, respectively. However, in Fig. 5, step S503 recites “analyze responses for respective evaluation elements” and nothing else. Thus, the disclosure is silent regarding any meaningful description of the steps, calculations, or algorithms necessary to perform the claimed functionality. Dependent claims 3-6 and 9-12 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
Further regarding claims 1 and 7, the disclosure fails to provide sufficient written description for “selecting, for each evaluation index, an AI analysis module suited to an analysis purpose corresponding to the evaluation index, and sequentially or in parallel driving the selected AI analysis modules in consideration of limited computing resources” to show one of ordinary skill in the art that Applicant had possession of the claimed invention. Claims may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. It is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. See MPEP 2161.01(I). The disclosure, at best, merely recites similar language as the claim without any meaningful description of the claimed selection and driving of AI analysis modules. See, for example, at least Fig. 2 and 3, which merely illustrate numbered AI Analysis Modules as non-descript black boxes, Fig. 4, which merely illustrates AI Analysis Module Selection Unit 409 and Response Detection AI Analysis Module Driving Unit 410 as non-descript black boxes, and para. 42-46, 52, 58, 63, 73-76, 93-97, and 110 of the specification. In particular, Applicant asserts that para. 70-75 of the specification provide support for these limitations (see current Remarks at pg. 6). Within these paragraphs, para. 73-75 are specific to these limitations. Para. 73 and 75 merely recite similar language as the claim, while para. 74 merely recites, in results-based language, that “the AI analysis module selection unit 409 may include AI analysis modules that are capable of effectively analyzing expected responses depending on respective evaluation indices based on the collected response data” without any actual description. Similarly, para. 46, 76, and 97 merely recite, as intended uses, that these non-descript AI analysis modules correspond to “gaze tracking for eye contact detection, facial expression recognition for social smiling detection, and head pose estimation for name-calling response detection”. Thus, the disclosure is silent regarding any meaningful description of the steps, calculations, or algorithms necessary to perform the claimed functionality. Dependent claims 3-6 and 9-12 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
Claim Rejections - 35 USC § 101
The text of 35 U.S.C. 101 not included in this action can be found in a prior Office action.
Claims 1, 3-7, and 9-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without including additional elements that are sufficient to amount to significantly more than the judicial exception itself.
Step 1
The claims are directed to a method and an apparatus, which fall within the four statutory categories (STEP 1: YES).
Step 2A, Prong 1
Independent claim 1 recites:
An interaction-based artificial intelligence analysis apparatus, comprising:
one or more processors; and
a memory configured to store at least one program that is executed by the one or more processors,
wherein the at least one program is configured to:
output structured video content for each evaluation index so as to elicit an interaction to an examinee at each stimulus time-frame of a preset timeline,
collect response data of the examinee for each evaluation index through a camera and a microphone at each response time-frame of the preset timeline, and
analyze the response data for each evaluation index using an Artificial Intelligence (AI) analysis module corresponding to the evaluation index,
wherein the structured video content is configured using a scenario for eliciting the interaction based on a characteristic of the examinee and a protocol presented by a test tool,
wherein the response data includes visual and auditory information corresponding to at least one of eye contact, name-calling response, joint attention, imitation behavior, pointing gesture, social referencing, social smiling, and language behavior,
wherein the AI analysis module includes at least one of a gaze tracking module, a facial expression recognition module, a head pose estimation module, and a plurality of evaluation-specific analysis modules corresponding to respective evaluation indices, each configured to detect a response of the examinee during the corresponding response timeframe,
wherein the at least one program is further configured to:
encode data including the preset timeline and the evaluation indices of the collected response data;
store the encoded response data in a storage device or transmit the encoded response data to another device;
decode the encoded response data together with the preset timeline and the evaluation indices to perform structured AI analysis; and
select, for each evaluation index, an AI analysis module suited to an analysis purpose corresponding to the evaluation index, and sequentially or in parallel drive the selected AI analysis modules in consideration of limited computing resources.
Independent claim 7 recites:
An interaction-based artificial intelligence analysis method performed by an interaction-based artificial intelligence analysis apparatus, comprising:
outputting structured video content for each evaluation index so as to elicit an interaction to an examinee at each stimulus time-frame of a preset timeline;
collecting response data of the examinee for each evaluation index through a camera and a microphone at each response time-frame of the preset timeline; and
analyzing the response data for each evaluation index using an Artificial Intelligence (AI) analysis module corresponding to the evaluation index,
wherein the structured video content is configured using a scenario for eliciting the interaction based on a characteristic of the examinee and a protocol presented by a test tool,
wherein the response data includes visual and auditory information corresponding to at least one of eye contact, name-calling response, joint attention, imitation behavior, pointing gesture, social referencing, social smiling, and language behavior,
wherein the AI analysis module includes at least one of a gaze tracking module, a facial expression recognition module, a head pose estimation module, and a plurality of evaluation-specific analysis modules corresponding to respective evaluation indices, each configured to detect a response of the examinee during the corresponding response timeframe,
further comprising:
encoding data including the preset timeline and the evaluation indices of the collected response data;
storing the encoded response data in a storage device or transmitting the encoded response data to another device;
decoding the encoded response data together with the preset timeline and the evaluation indices to perform structured AI analysis; and
selecting, for each evaluation index, an AI analysis module suited to an analysis purpose corresponding to the evaluation index, and sequentially or in parallel driving the selected AI analysis modules in consideration of limited computing resources.
All of the foregoing underlined elements identified above amount to the abstract idea grouping of a certain method of organizing human activity because they amount to managing personal behavior or interactions between people (including social activities, teaching, and following rules or instructions) by implementing a diagnostic test of an examinee by providing a stimulus, collecting information, analyzing the collected information, and outputting the results of the collection and analysis. Additionally, the collection, analysis, and output of the results are interpreted as a series of steps that could reasonably be performed by mental processes with the aid of pen and paper because the claims, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components. See MPEP 2106.04(a)(2)(III)(C) - A Claim That Requires a Computer May Still Recite a Mental Process. Similarly, merely reciting, as an ancillary intended use, that indefinite “limited computing resources” are considered is not a technological feature, but rather at least part of the abstract idea grouping of mental processes, as considering resources when implementing a process is a routine part of process planning historically performed in the mind of a human. Lastly, the performance of AI analysis amounts to the abstract idea grouping of mathematical concepts because it recites mathematical calculations as defined in MPEP 2106.04(a)(2)(I), which recites that a “claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the ‘mathematical concepts’ grouping” because a “mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation. There is no particular word or set of words that indicates a claim recites a mathematical calculation. That is, a claim does not have to recite the word ‘calculating’ in order to be considered a mathematical calculation. For example, a step of ‘determining’ a variable or number using mathematical methods or ‘performing’ a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation.”
The dependent claims amount to merely further defining the judicial exception.
Therefore, the claims recite a judicial exception. (STEP 2A, PRONG 1: YES).
Step 2A, Prong 2
This judicial exception is not integrated into a practical application because the independent and dependent claims do not include additional elements that are sufficient to integrate the exception into a practical application under the considerations set forth in MPEP 2106.04(d). The elements of the claims above that are not underlined constitute additional elements.
The following additional elements, both individually and as a whole, merely generally link the judicial exception to a particular technological environment or field of use: a non-descript artificial intelligence analysis apparatus comprising one or more processors and a memory configured to store at least one program that is executed by the one or more processors (claim 1), reciting the structured content as “structured video content” (claims 1 and 7), a camera (claims 1 and 7), a microphone (claims 1 and 7), a non-descript Artificial Intelligence (AI) analysis module (claims 1 and 7) that includes at least one of a list of modules that perform functions that a human traditionally performs, and reciting the method is a non-descript artificial intelligence analysis method that is performed by a non-descript artificial intelligence analysis apparatus (claim 7), a storage device (claims 1 and 7), and transmitting the encoded response data to another device (claims 1 and 7). This is evidenced by the manner in which these elements are disclosed. For example, the drawings illustrate the elements as non-descript black boxes or stock images with Fig. 2-5 indicating that the claimed invention is purely software, while para. 34-39, 41, 44-46, 54-77, and 101-103 of the specification merely provide stock descriptions of generic computer hardware and software components in any generic arrangement. Furthermore, this also evidences that the computer components are merely an attempt to link the abstract idea to a particular technological environment, but do not result in an improvement to the technology or computer functions employed. It should be noted that because the courts have made it clear that the mere physicality or tangibility of an additional element or elements is not a relevant consideration in the eligibility analysis, the physical nature of the computing device and associated hardware does not affect this analysis. 
See MPEP 2106.05(I) for more information on this point, including explanations from judicial decisions such as Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 US 208, 224-26 (2014). Similarly, the courts have identified that merely storing and transmitting data by use of conventional or generic technology in a well-known environment is not sufficient to show an improvement in computer functionality. The claims do not recite any specific rules with specific characteristics that improve the functionality of the computer system. This is exemplified by para. 46, 76, and 97, which merely recite the intended use that the nondescript AI analysis modules “correspond to” - “gaze tracking for eye contact detection, facial expression recognition for social smiling detection, and head pose estimation for name-calling response detection”. In particular, any asserted AI analysis module merely acts to link the judicial exception to a technological environment, namely implementation by a computer, and is not directed towards any improvement in computer functionality. Thus, none of the hardware offers a meaningful limitation beyond generally linking the performance of the steps to a particular technological environment, that is, implementation via computers. Additionally, the claims do not apply or use a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, nor do they apply or use a judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claims are directed to the judicial exception. (STEP 2A, PRONG 2: NO).
Step 2B
The independent and dependent claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception under the considerations set forth in MPEP 2106.05. As identified in Step 2A, Prong 2, above, the claimed process does not require the use of a particular machine, nor does it result in the transformation of an article. This is at least evidenced by the manner of disclosure, which indicates that the additional elements are sufficiently well known that the specification does not need to describe their particulars to satisfy 35 USC 112(a), as identified in Step 2A, Prong 2, above. Furthermore, as identified in Step 2A, Prong 2, above, the computer components are merely an attempt to link the abstract idea to a particular technological environment and do not result in an improvement to the technology or computer functions employed. Similarly, the courts have identified that merely storing and transmitting data by use of conventional or generic technology in a well-known environment amounts to well-understood, routine, and conventional activity. The claims do not recite any specific rules with specific characteristics that improve the functionality of the computer system. None of the hardware offers a meaningful limitation beyond generally linking the performance of the steps to a particular technological environment, that is, implementation via computers. Again, this is evidenced by the manner in which these elements are disclosed, as identified above. Viewed as a whole, these additional claim elements do not provide a meaningful limitation to transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself (STEP 2B: NO).
Therefore, the claims are rejected under 35 USC 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 102
The text of those sections of Title 35, U.S. Code 102 not included in this action can be found in a prior Office action.
Claims 1, 3-7, and 9-12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kibar (US 2004/0210159).
Regarding claims 1 and 7, Kibar teaches an interaction-based artificial intelligence analysis apparatus, comprising: one or more processors; and a memory configured to store at least one program that is executed by the one or more processors (Kibar, para. 22, “non-contact software technologies such as artificial intelligence or content analysis software”) (claim 1) and an interaction-based artificial intelligence analysis method performed by an interaction-based artificial intelligence analysis apparatus (Kibar, para. 22, “non-contact software technologies such as artificial intelligence or content analysis software”) (claim 7), comprising:
outputting structured video content for each evaluation index so as to elicit an interaction to an examinee at each stimulus time-frame of a preset timeline (Kibar, para. 32, “The content, amount, and timing of the video and image information and the audio information can be pre-selected to provide predetermined stimuli to the subject over a period of time in a manner that will elicit responses by the subject that are measured by the three cameras and the microphone.”);
collecting response data of the examinee for each evaluation index through a camera and a microphone at each response time-frame of the preset timeline (Kibar, para. 32, “The content, amount, and timing of the video and image information and the audio information can be pre-selected to provide predetermined stimuli to the subject over a period of time in a manner that will elicit responses by the subject that are measured by the three cameras and the microphone.”); and
analyzing the response data for each evaluation index using an Artificial Intelligence (AI) analysis module corresponding to the evaluation index (Kibar, para. 34, “The psychology analysis software may use a variety of known techniques, including computer science, neural network, fuzzy logic, or artificial intelligence approaches, to derive the hypotheses or conclusions.”),
wherein the structured video content is configured using a scenario for eliciting the interaction based on a characteristic of the examinee and a protocol presented by a test tool (Kibar, para. 32, “The selection of the stimuli may be pre-determined or may be selected by an operator of the system, for example, a psychologist based on the psychologist's judgment of stimuli that would be especially useful in eliciting responses that can be analyzed.” Para. 41, “The specific rules for operation of the psychology analysis software may be entered by one or more expert psychologists based on their knowledge of the field or based on specific tests of subjects using particular stimuli and observing the responses of the subjects. These rules for operation can also be updated in real time based on prior or current information.” Para. 55, “stimuli 72 that are selected and controlled to be relevant to a psychological analysis that is to be conducted”),
wherein the response data includes visual and auditory information corresponding to at least one of eye contact, name-calling response, joint attention, imitation behavior, pointing gesture, social referencing, social smiling, and language behavior (Kibar, para. 14, “The responses include changes in the subject's face. The responses include changes in the subject's voice. The responses include changes in the subject's posture. The responses include changes in the content of a subject's speech. The responses include changes in the content of a subject's writings. The responses are also recorded before or after the performance of the multimedia work. The interpreting takes account of delays between responses in different modes of expression. The interpreting takes account of differing weights of contributions of responses in different modes of expression to determine a state. The interpreting includes comparison of the integrated responses to a norm.”),
wherein the AI analysis module includes at least one of a gaze tracking module, a facial expression recognition module, a head pose estimation module, and a plurality of evaluation-specific analysis modules corresponding to respective evaluation indices, each configured to detect a response of the examinee during the corresponding response timeframe (Kibar, para. 29, “The software 56 processes the images to produce information (content) about the position, orientation, motion, and state of the head, body, face, and eyes of the subject. For example, the video processing software may include conventional routines that use the video data to track the position, motion, and orientation of the subject's head (head tracking software), the subject's body (gait analysis software), the subject's face (facial expression analysis software), and the subject's eyes (eye tracking software). The video processing software may also include conventional thermal image processing that determines thermal profiles and changes in thermal profiles of the subject's face (facial heat imaging software).” Para. 33, “The audio and video control software also provides information about the timing and progress of the presented stimuli to psychology analysis software 60. The psychology analysis software can then match the stimuli with the response content being received from the image/video and audio processing and content analysis software. The psychology analysis software 60 uses the response content, the known timing of the stimuli, and known relationships between the stimuli and possible response content to provide psychological evaluations 62 of the subject.”),
further comprising:
encoding data including the preset timeline and the evaluation indices of the collected response data (Kibar, para. 29, “The software 56 processes the images to produce information (content) about the position, orientation, motion, and state of the head, body, face, and eyes of the subject. For example, the video processing software may include conventional routines that use the video data to track the position, motion, and orientation of the subject's head (head tracking software), the subject's body (gait analysis software), the subject's face (facial expression analysis software), and the subject's eyes (eye tracking software). The video processing software may also include conventional thermal image processing that determines thermal profiles and changes in thermal profiles of the subject's face (facial heat imaging software).” Para. 33, “The audio and video control software also provides information about the timing and progress of the presented stimuli to psychology analysis software 60. The psychology analysis software can then match the stimuli with the response content being received from the image/video and audio processing and content analysis software.” Processing the collected data before analysis by the computerized system is construed as encoding the data.);
storing the encoded response data in a storage device (Kibar, para. 56, “updating equipment (including software, databases, lookup tables, etc.).” Updating databases is construed as storing the encoded response data in a storage device because one of ordinary skill in the art routinely uses “database” as synonymous with “storage device”.) or transmitting the encoded response data to another device;
decoding the encoded response data together with the preset timeline and the evaluation indices to perform structured AI analysis (Kibar, para. 33, “The psychology analysis software 60 uses the response content, the known timing of the stimuli, and known relationships between the stimuli and possible response content to provide psychological evaluations 62 of the subject.”); and
selecting, for each evaluation index, an AI analysis module suited to an analysis purpose corresponding to the evaluation index, and sequentially or in parallel driving the selected AI analysis modules in consideration of limited computing resources (Kibar, para. 34, “The psychology analysis software may use a variety of known techniques, including computer science, neural network, fuzzy logic, or artificial intelligence approaches, to derive the hypotheses or conclusions. For example, the software may store rules that relate particular response content to psychological states. The software may analyze the received response content to infer categories of responses that are occurring, and then use the determined responses as the basis for triggering the stored rules.”).
Regarding claims 3 and 9, Kibar teaches the interaction-based artificial intelligence analysis apparatus of claim 1 and the interaction-based artificial intelligence analysis method of claim 7, wherein the timeline is set to a pair of a stimulus time-frame and a response time-frame for each evaluation index (Kibar, para. 35, “The movie, segment, or frame may be (but is not required to be) interactive, inviting the subject to speak or perform actions at predetermined times.”).
Regarding claims 4 and 10, Kibar teaches the interaction-based artificial intelligence analysis apparatus of claim 3 and the interaction-based artificial intelligence analysis method of claim 9, wherein the timeline is formed such that a size of a response collection time-frame corresponding to the stimulus time-frame is individually set for each evaluation index (Kibar, para. 35, “The movie, segment, or frame may be (but is not required to be) interactive, inviting the subject to speak or perform actions at predetermined times.”).
Regarding claims 5 and 11, Kibar teaches the interaction-based artificial intelligence analysis apparatus of claim 3 and the interaction-based artificial intelligence analysis method of claim 9, wherein the response time-frame is formed such that a limited time during which response data of the examinee is collected is individually set for each evaluation index (Kibar, para. 35, “The movie, segment, or frame may be (but is not required to be) interactive, inviting the subject to speak or perform actions at predetermined times.”).
Regarding claims 6 and 12, Kibar teaches the interaction-based artificial intelligence analysis apparatus of claim 1 and the interaction-based artificial intelligence analysis method of claim 7, further comprising:
outputting a result of analyzing the response data for each evaluation index (Kibar, para. 13, “presenting the evaluation results”; para. 92, “These results could be provided to the subjects directly or first interpreted by a professional.”).
Response to Arguments
Applicant's arguments with respect to the rejections of the claims under 35 USC 112(a) have been fully considered but they are not persuasive. Applicant asserts that claims 1 and 7 have been amended to include new limitations that expressly recite concrete data-processing components and operations described in the specification. Here, Applicant asserts that para. 70-75 of the specification provide support for these new limitations.
Examiner is not persuaded. Applicant is directed to the rejections of the claims, which have been updated to address the amendments to the claims and which illustrate that para. 70-75 merely recite language similar to the new limitations without any meaningful description of the claimed functionality.
Applicant's arguments with respect to the rejections of the claims under 35 USC 101 have been fully considered but they are not persuasive. Applicant asserts that amended claims 1 and 7 are not directed to an abstract idea.
Examiner is not persuaded. Applicant is directed to the rejections of the claims, which have been updated to address the amendments to the claims. It is noted that the newly added limitations are encompassed within the judicial exception. Furthermore, merely reciting an ancillary intended use in which undisclosed “limited computing resources” are considered is not a technological feature, but rather at least part of the abstract idea grouping of mental processes, as considering the resources needed to implement a process is a routine part of process planning historically performed by a human.
Applicant's arguments with respect to the rejections of the claims under 35 USC 102 have been fully considered but they are not persuasive. Applicant asserts that Kibar does not teach the amended claims.
Examiner is not persuaded. Applicant is directed to the rejections, which have been updated to address the amendments to the claims.
The rejections stand.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL LANE whose telephone number is (303)297-4311. The examiner can normally be reached Monday - Friday 8:00 - 4:30 MT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xuan Thai can be reached at (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL LANE/Examiner, Art Unit 3715