Prosecution Insights
Last updated: April 19, 2026
Application No. 17/854,905

ELECTRONIC APPARATUS AND CONTROL METHOD THEREOF

Final Rejection — §101, §103
Filed: Jun 30, 2022
Examiner: KRASNIC, BERNARD
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 78% (Favorable)
OA Rounds: 3-4
To Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 78% (403 granted / 518 resolved; +15.8% vs TC avg, above average)
Interview Lift: strong, +57.1% among resolved cases with interview
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 534 across all art units (16 currently pending)
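The headline percentages above are simple ratios and can be sanity-checked. A minimal sketch, assuming the interview lift is relative (the with-interview allow rate divided by the without-interview rate, minus one):

```python
# Career allow rate: grants over resolved cases (403 / 518, from above).
granted, resolved = 403, 518
allow_rate = granted / resolved  # ~0.778, the 78% headline figure

# Assumption: the "+57.1% interview lift" is relative, i.e.
# with_interview = without_interview * (1 + lift).
# Given the 99% with-interview figure, the implied
# without-interview allow rate follows by division.
with_interview, lift = 0.99, 0.571
without_interview = with_interview / (1 + lift)  # ~0.63

print(f"career allow rate: {allow_rate:.1%}")
print(f"implied without-interview rate: {without_interview:.1%}")
```

On that assumption, an interview moves this examiner from roughly a 63% allow rate to 99%, which is why the interview recommendation is so strong.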

Statute-Specific Performance

§101: 11.9% (-28.1% vs TC avg)
§103: 44.4% (+4.4% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 22.5% (-17.5% vs TC avg)
Deltas vs Tech Center average estimate • Based on career data from 518 resolved cases
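The four deltas are consistent with a single baseline. A quick check, assuming each delta is a percentage-point difference between the examiner's per-statute rate and the Tech Center average estimate:

```python
# Per-statute rates and their stated deltas vs the TC average,
# both in percentage points, as shown above.
stats = {
    "101": (11.9, -28.1),
    "103": (44.4, +4.4),
    "102": (14.8, -25.2),
    "112": (22.5, -17.5),
}

# Implied TC-average estimate per statute: rate minus delta.
implied = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}

# All four statutes imply the same 40.0% baseline estimate.
assert all(v == 40.0 for v in implied.values())
```

That every statute backs out the same 40.0% suggests the dashboard compares against one fixed Tech Center estimate rather than per-statute averages.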

Office Action

§101, §103
DETAILED ACTION

Response to Arguments

The amendment filed 6/30/2025 has been entered and made of record. The Applicant has canceled claims 6-7 and 16-17. Claims 1-5, 8-15, and 18-20 are pending; claims 5, 8-10, 15, and 18-20 are withdrawn from further consideration. Applicant's arguments filed 6/30/2025 have been fully considered, but they are not persuasive.

The Applicant alleges, “III. Rejections …” in pages 8-10, and states that claim 1 is patent eligible under Prong Two of revised Step 2A of the Alice test because the claim integrates the alleged judicial exception into a practical application, and moreover that amended independent claim 1 recites significantly more than any allegedly abstract idea. The Examiner disagrees because the Applicant has not provided any evidence in these “Section III. Rejections … Under 35 U.S.C. 101 …” arguments as to why the Applicant believes the amended claim integrates the alleged judicial exception into a practical application or why the amended claim recites significantly more than any allegedly abstract idea. These points are further addressed in the 35 U.S.C. 101 rejection section below.

The Applicant alleges, “IV. Rejections …” in pages 10-14, and states that Wu and Li fail to disclose or render obvious the presently claimed combination of features recited in amended claims 1 and 11, because Wu discloses obtaining relationship information based on user behavioral information or statistical information rather than image processing via a neural network model, and because Li fails to disclose extracting visual features through image processing via a neural network model and obtaining relationship information based on the extracted features. The Examiner disagrees because the combination of Wu and Li does disclose this limitation under the broadest reasonable interpretation of the claim language.
More specifically, Wu discloses “obtain relationship information between the plurality of persons” (see Wu, Fig. 1, [0020]-[0024], acquire an image, recognize people, and obtain relation information according to the image data), and Li further discloses “wherein the neural network model is a model trained to obtain the relationship information between the plurality of person images, based on at least one of an attire, pose or facial expression of the person included in the plurality of person images or object information included in an input image, if the plurality of person images are identified in the input image” (see Li, Fig. 3, abstract, Section 3.1, Section 4, the trained neural network architecture to output relationship recognition [according to the person images and their extracted CNN visual features], Section 6.3, Section 6.8, Table 3: Dual-glance + all attributes, pre-trained based on clothing, head pose, face emotion, contextual objects, etc.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wu’s apparatus using Li’s teachings, by including the cropped plurality of person images for each of Wu’s recognized people and by adding Li’s relationship-recognition neural network architecture processing to Wu’s relation information processing, in order to improve the relationship recognition (see Li, Fig. 3, abstract, Section 3.1, Section 6.3, Table 3: Dual-glance + all attributes). Further discussions are addressed in the prior art rejection section below. Therefore, claims 1-4 and 11-14 are still not in condition for allowance because they are not patentably distinguishable over the prior art references.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-4 and 11-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without integration into a practical application or recitation of significantly more. In the analysis below, the apparatus of independent claim 1 and similarly the method of independent claim 11 are directed to one of the four statutory categories of eligible subject matter; thus, the claims pass Step 1 of the Subject Matter Eligibility Test (see flowchart in MPEP 2106).

Step 2A, prong 1 analysis

The independent claims are directed to “input a captured image to obtain a plurality of person images included in the captured image and relationship information between the plurality of person images, obtain first identification information corresponding to a first person image and second identification information corresponding to a second person image, among the plurality of person images, based on the image information, obtain group information including the first identification information and the second identification information, based on the relationship information, if the captured image including both the first person image and the second person image is identified among the plurality of captured images … by a threshold number or more, and provide recommendation information related to the person corresponding to each of the first identification information and the second identification information among the plurality of persons, based on the obtained group information, wherein … obtain the relationship information between the plurality of person images, based on at least one of an attire, pose or facial expression of the person included in the plurality of person
images or object information included in an input image, if the plurality of person images are identified in the input image”.

Each of the above limitations of “input a captured image to obtain a plurality of person images included in the captured image and relationship information between the plurality of person images”, “obtain first identification information corresponding to a first person image and second identification information corresponding to a second person image, among the plurality of person images, based on the image information”, “obtain group information including the first identification information and the second identification information, based on the relationship information, if the captured image including both the first person image and the second person image is identified among the plurality of captured images … by a threshold number or more”, “provide recommendation information related to the person corresponding to each of the first identification information and the second identification information among the plurality of persons, based on the obtained group information”, and “wherein … obtain the relationship information between the plurality of person images, based on at least one of an attire, pose or facial expression of the person included in the plurality of person images or object information included in an input image, if the plurality of person images are identified in the input image”, as drafted, is a process that, under the broadest reasonable interpretation, covers performance of the limitation in the human mind, which falls within the “Mental Processes” grouping of abstract ideas.
Additional elements

The additional elements recited in independent claims 1 and 11, respectively, are the elements of “electronic apparatus”, “a memory configured to store image information and identification information, corresponding to each of a plurality of persons”, “processor”, “a neural network model”, “the plurality of captured images stored in the memory”, “wherein the neural network model is a model trained …”, and “control method of an electronic apparatus including image information and identification information, corresponding to each of a plurality of persons”.

Step 2A, prong 2 analysis

The above-identified additional elements do not integrate the judicial exception into a practical application. The steps “store image information and identification information, corresponding to each of a plurality of persons”, “the plurality of captured images stored”, and “including image information and identification information, corresponding to each of a plurality of persons” merely constitute activity involving data gathering. Such extra-solution activity does not integrate the abstract idea into a practical application. Please see MPEP §2106.05(g). The other additional elements “electronic apparatus”, “a memory”, “processor”, “a neural network model”, “the memory”, “wherein the neural network model is a model trained”, and a “control method of an electronic apparatus” amount to merely using a computer as a tool to perform the claimed mental process. Implementing an abstract idea on a computer does not integrate a judicial exception into a practical application (see MPEP 2106.05(f)).
Moreover, the additional elements of the claims do not recite an improvement in the functioning of a computer or other technology or technical field, the claimed steps are not performed using a particular machine, the claimed steps do not effect a transformation, and the claims do not apply the judicial exception in any meaningful way beyond generically linking the use of the judicial exception to a particular technological environment (see MPEP 2106.04(d)). Therefore, the analysis under prong two of Step 2A of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (see flowchart in MPEP 2106).

Step 2B

Finally, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As noted above, the steps of “store image information and identification information, corresponding to each of a plurality of persons”, “the plurality of captured images stored”, and “including image information and identification information, corresponding to each of a plurality of persons” amount to insignificant extra-solution activity. Such insignificant extra-solution activity does not constitute significantly more than the claimed data gathering (see MPEP 2106.05(g)). The other additional elements “electronic apparatus”, “a memory”, “processor”, “a neural network model”, “the memory”, “wherein the neural network model is a model trained”, and a “control method of an electronic apparatus” are generic computer features that perform generic computer functions that are well-understood, routine, and conventional, and do not amount to more than implementing the abstract idea with a computerized system. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually.
There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation, and mere implementation on a generic computer does not add significantly more to the claims. Accordingly, the analysis under Step 2B of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (see flowchart in MPEP 2106). For all of the foregoing reasons, independent claims 1 and 11 do not recite eligible subject matter under 35 USC 101.

Regarding Dependent Claims 2-4 and 12-14:

Claims 2-4 and 12-14 are dependent on corresponding independent claims 1 and 11, respectively, and therefore include all the limitations of the corresponding independent claims. Thus, claims 2-4 and 12-14 recite “Mental Processes”. Claims 2-4 and 12-14 further describe:

Dependent claim 2 [and similarly dependent claim 12] merely describes “wherein if an event including any one of the first identification information or the second identification information is generated in a specific application, based on a user command … provide the rest … as recommendation information for the event”.

Dependent claim 3 [and similarly dependent claim 13] merely describes “wherein if a scheduling event including any one of the first identification information or the second identification information is generated in the specific application, based on the user command … provide the rest … as the recommendation information for the scheduling event”.

Dependent claim 4 [and similarly dependent claim 14] merely describes “wherein if an event for transmitting a message to any one of the first identification information or the second identification information is generated in the specific application, based on the user command … provide the rest … as the recommendation information for sharing the message”.
However, these limitations, due to their broad generality, are merely processes that, under the broadest reasonable interpretation, cover performance of the limitations in the human mind, which falls within the same “Mental Processes” grouping of abstract ideas, and they do not integrate the abstract idea into a practical application or add significantly more. Thus, claims 2-4 and 12-14 do not recite eligible subject matter under 35 USC 101.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4 and 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over Wu et al. (US 2014/0337344 A1, as applied in the previous Office Action) in view of Li et al. (“Visual Social Relationship Recognition”, International Journal of Computer Vision, February 3, 2020, pages 1-15, provided by Applicant’s Information Disclosure Statement (IDS), as applied in the previous Office Action).

Re Claim 1: Wu discloses an electronic apparatus (see Wu, [0020]) comprising: a memory configured to store image information and identification information, corresponding to each of a plurality of persons (see Wu, Fig.
1, [0020], storage, [0028], stored and updated in the storage); and a processor (see Wu, [0020], computer implemented) configured to: input a captured image to obtain a plurality of persons included in the captured image and relationship information between the plurality of persons (see Wu, Fig. 1, [0020]-[0024], acquire an image, recognize people, and obtain relation information according to the image data), obtain first identification information corresponding to a first person and second identification information corresponding to a second person, among the plurality of persons, based on the image information (see Wu, Fig. 1, [0020]-[0024], recognized people in the image and determine each contact), obtain group information including the first identification information and the second identification information, based on the relationship information (see Wu, Fig. 1, [0025], [0027]-[0028], contact group according to the relation information), if the captured image including both the first person image and the second person image is identified among the plurality of captured images stored in the memory by a threshold number or more (see Wu, Fig. 1, [0024]-[0025], the contacts corresponding to the connecting relation whose score exceeds a predetermined value is able to form a group, [0027]-[0028], contact group according to the relation information), and provide recommendation information related to the person corresponding to each of the first identification information and the second identification information among the plurality of persons, based on the obtained group information (see Wu, Fig. 1, [0025], [0027]-[0028], contact group, and more specifically [0050]-[0053], recommendation and suggestion relating to the people according to the contact group). 
However, Wu fails to explicitly disclose the following, which Li discloses: input a captured image to a neural network model to obtain a plurality of person images included in the captured image and relationship information between the plurality of person images (see Li, Fig. 3, abstract, Section 3.1, the neural network architecture obtains a plurality of person images p1, p2, etc. included in the image I and outputs relationship recognition between the people); obtain first identification information corresponding to a first person image and second identification information corresponding to a second person image, among the plurality of person images (see Li, Fig. 3, abstract, Section 3.1, Section 6.3, All Attributes); wherein the neural network model is a model trained to obtain the relationship information between the plurality of person images, based on at least one of an attire, pose or facial expression of the person included in the plurality of person images or object information included in an input image, if the plurality of person images are identified in the input image (see Li, Fig. 3, abstract, Section 3.1, Section 4, the trained neural network architecture to output relationship recognition according to the person images and their extracted CNN visual features, Section 6.3, Section 6.8, Table 3: Dual-glance + all attributes, pre-trained based on clothing, head pose, face emotion, contextual objects, etc.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wu’s apparatus using Li’s teachings, by including the cropped plurality of person images for each of Wu’s recognized people and by adding Li’s relationship-recognition neural network architecture processing to Wu’s relation information processing, in order to improve the relationship recognition (see Li, Fig. 3, abstract, Section 3.1, Section 6.3, Table 3: Dual-glance + all attributes).
Re Claim 2: Wu further discloses wherein if an event including any one of the first identification information or the second identification information is generated in a specific application, based on a user command, the processor is further configured to provide the rest of the first identification information or the second identification information as the recommendation information for the event (see Wu, Fig. 1, [0025], [0027]-[0028], contact group, and more specifically [0050]-[0053], selects Ken as the receiver [e.g. like planning an event in the calendar] and then can recommend and suggest Ted and Mary relating to the people according to the contact group as the receivers too).

Re Claim 3: Wu further discloses wherein if a scheduling event including any one of the first identification information or the second identification information is generated in the specific application, based on the user command, the processor is further configured to provide the rest of the first identification information or the second identification information as the recommendation information for the scheduling event (see Wu, Fig. 1, [0025], [0027]-[0028], contact group, and more specifically [0050]-[0053], selects Ken as the receiver [e.g. like planning an event in the calendar] and then can recommend and suggest Ted and Mary relating to the people according to the contact group as the receivers too).

Re Claim 4: Wu further discloses wherein if an event for transmitting a message to any one of the first identification information or the second identification information is generated in the specific application, based on the user command, the processor is further configured to provide the rest of the first identification information or the second identification information as the recommendation information for sharing the message (see Wu, Fig. 1, [0025], [0027]-[0028], contact group, and more specifically [0050]-[0053], selects Ken as the receiver [e.g.
like an IM, email, chat room, etc.] and then can recommend and suggest Ted and Mary relating to the people according to the contact group as the receivers too).

As to claim 11, it is the control method claim corresponding to apparatus claim 1; the discussions are addressed with regard to claim 1. As to claims 12, 13, and 14, the discussions are addressed with regard to claims 2, 3, and 4, respectively.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BERNARD KRASNIC, whose telephone number is (571) 270-1357. The examiner can normally be reached Mon.-Thur. and every other Friday, 8am-4pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vincent Rudolph, can be reached at (571) 272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Bernard Krasnic/
Primary Examiner, Art Unit 2671
September 9, 2025

Prosecution Timeline

Jun 30, 2022
Application Filed
May 02, 2025
Non-Final Rejection — §101, §103
Jun 30, 2025
Response Filed
Sep 09, 2025
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596743
METHOD, APPARATUS, SYSTEM, AND NON-TRANSITORY COMPUTER READABLE MEDIUM FOR PERFORMING IMAGE SEARCH VERIFICATION USING AN ONLINE PLATFORM
2y 5m to grant • Granted Apr 07, 2026
Patent 12591619
Content Descriptor
2y 5m to grant • Granted Mar 31, 2026
Patent 12561992
SYSTEMS AND METHODS FOR FACILITATING CLONE SELECTION
2y 5m to grant • Granted Feb 24, 2026
Patent 12561838
METHOD AND APPARATUS WITH SENSOR CALIBRATION
2y 5m to grant • Granted Feb 24, 2026
Patent 12561364
Fool-Proofing Product Identification
2y 5m to grant • Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 99% (+57.1%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 518 resolved cases by this examiner. Grant probability derived from career allow rate.
