Prosecution Insights
Last updated: April 19, 2026
Application No. 18/376,058

ELECTRONIC APPARATUS AND METHOD OF ACQUIRING TOUCH COORDINATES THEREOF

Status: Non-Final Office Action (§103)
Filed: Oct 03, 2023
Examiner: SUBEDI, DEEPROSE D
Art Unit: 2627
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 4 (Non-Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 4-5
Time to Grant: 1y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (449 granted / 515 resolved; +25.2% vs TC avg; above average)
Interview Lift: +13.8% (moderate; based on resolved cases with interview)
Avg Prosecution: 1y 10m (fast prosecutor)
Total Applications: 534 across all art units (19 currently pending)

Statute-Specific Performance

§101: 1.8% (-38.2% vs TC avg)
§103: 51.7% (+11.7% vs TC avg)
§102: 34.8% (-5.2% vs TC avg)
§112: 4.1% (-35.9% vs TC avg)
Comparisons are against the Tech Center average estimate; based on career data from 515 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. All the claims have been examined on the basis of the merits of the claims.

Priority

The present application is a continuation from PCT/KR2311374 filed 08/02/2023, which claims foreign priority benefits from KR1020220141497 filed on 10/28/2022 in Korea. The certified copy of the priority document was electronically retrieved on 11/06/2023. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/06/2026 has been entered.

Response to Arguments

Applicant's arguments, see pages 11-12, filed 02/06/2026, with respect to the rejection(s) of claim(s) 1, 11 and 15 under 102 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of the WEISHAUPT reference.

Claim Objections

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS. —Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claim 22, which depends on claim 1, does not specify a further limitation of the subject matter claimed.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 5, 11, 15, 19 and 21-23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Xiao et al. (US-20150242009-A1, hereinafter Xiao) in view of WEISHAUPT et al. (US-20120062474-A1, hereinafter WEISHAUPT).
In regard to claim 1, Xiao discloses an electronic apparatus (fig. 1) comprising: a display including a capacitive type touch screen (display 120 with capacitive touch screen 110); a memory storing at least one instruction (memory 104, fig. 1); and at least one processor configured to be connected with the display and the memory (processor 102 connected to the display 120 and the memory 104, fig. 1), and control the electronic apparatus (fig. 1), wherein the at least one processor is configured to, by executing the at least one instruction: acquire an image including capacitive information corresponding to a touch input (fig. 2, step 210, access capacitive image data), apply noise filtering to the acquired image by converting the image to a frequency domain image and removing noise from the frequency domain image to acquire a pre-processed image (Xiao removes noise at step 252 and performs conversion into the frequency domain at step 254, which is identical or equivalent subject matter. Applicant has not presented evidence that the order of performing these steps is patentably distinct or imparts distinctive structural characteristics to the final product from that of Xiao; see advisory action dated 01/20/2026, page 2), and input the pre-processed image into an artificial intelligence model configured to output a touch coordinate corresponding to a location of the touch input on the capacitive type touch screen based on touch state information and touch type information determined from the image, input the pre-processed image into the artificial intelligence model and acquire the touch coordinate information corresponding to the touch input (steps 240-250 access vibro-acoustic data or other data produced by the touch event and determine touch type, para 0033-0034. Fig. 2, comprising steps 250-258, is interpreted as an artificial intelligence model or equivalent subject matter thereof. Para 0050 describes that the classification module 258 includes a neural network which can use different classifiers based on different features of the touch, for example, one classifier for touch events with small contact areas and another for touch events with large contact areas. Therefore, Xiao at step 250 determines a touch type as well as touch state information such as, but not limited to, vibro-acoustic data/state corresponding to the touches, accessing capacitive image data obtained from a capacitive touch panel).

Xiao does not disclose "wherein the at least one processor is configured to: segment the image to which the noise filtering is applied into a plurality of areas, and acquire the pre-processed image including gray level information corresponding to the plurality of areas." WEISHAUPT discloses wherein the at least one processor is configured to: segment the image to which the noise filtering is applied into a plurality of areas, and acquire the pre-processed image including gray level information corresponding to the plurality of areas (the filtered image of step 1010 of the pre-processing block 1000 is input into first and second gray level filtering processes to remove sharp noises and filter strong perturbations, respectively, and is then used as the input (image 8000) of the segmentation plus analysis block 200, step 2010. WEISHAUPT discloses that one or a plurality of regions is then identified in the processed image; in one preferred embodiment, a region is a connected group of pixels which can correspond to one touch, for example a group of connected pixels all having a value above a predetermined threshold. Therefore, when WEISHAUPT performs said gray level filtering, the gray level information corresponding to the plurality of regions of touch is acquired in the image 8000. Applicant has not presented evidence that the order of performing these steps is patentably distinct or imparts distinctive structural characteristics to the final product). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to use WEISHAUPT's teachings in Xiao's invention to segment the image of the capacitive touch comprising pixels having touch intensity and identify regions of the segmented image; find local maxima, each local maximum being of size one pixel on a sub-region inside each region; and determine at least one touch position based on said local maxima.

In regard to claim 11, Xiao discloses a method of acquiring touch coordinates of an electronic apparatus (fig. 1) comprising a capacitive type touch screen (display 120 with capacitive touch screen 110), the method comprising: acquiring an image including capacitive information corresponding to a touch input (fig. 2, step 210, access capacitive image data); applying noise filtering to the acquired image by converting the image to a frequency domain image and removing noise from the frequency domain image to acquire a pre-processed image (Xiao removes noise at step 252 and performs conversion into the frequency domain at step 254, which is identical or equivalent subject matter.
Applicant has not presented evidence that the order of performing these steps is patentably distinct or imparts distinctive structural characteristics to the final product from that of Xiao; see advisory action dated 01/20/2026, page 2); and inputting the pre-processed image into an artificial intelligence model configured to output a touch coordinate corresponding to a location of the touch input on the capacitive type touch screen based on touch state information and touch type information determined from the image, and the acquiring the touch coordinate further comprises: inputting the pre-processed image into the artificial intelligence model and acquiring the touch coordinate information corresponding to the touch input (steps 240-250 access vibro-acoustic data or other data produced by the touch event and determine touch type, para 0033-0034. Fig. 2, comprising steps 250-258, is interpreted as an artificial intelligence model or equivalent subject matter thereof. Para 0050 describes that the classification module 258 includes a neural network which can use different classifiers based on different features of the touch, for example, one classifier for touch events with small contact areas and another for touch events with large contact areas. Therefore, Xiao at step 250 determines a touch type as well as touch state information such as, but not limited to, vibro-acoustic data/state corresponding to the touches, accessing capacitive image data obtained from a capacitive touch panel).

Xiao does not disclose "wherein the acquiring the pre-processed image further comprises: segmenting the image to which the noise filtering is applied into a plurality of areas, and acquiring the pre-processed image including gray level information corresponding to the plurality of areas, and the acquiring the touch coordinate." WEISHAUPT discloses wherein the acquiring the pre-processed image further comprises: segmenting the image to which the noise filtering is applied into a plurality of areas, and acquiring the pre-processed image including gray level information corresponding to the plurality of areas, and the acquiring the touch coordinate (the filtered image of step 1010 of the pre-processing block 1000 is input into first and second gray level filtering processes to remove sharp noises and filter strong perturbations, respectively, and is then used as the input (image 8000) of the segmentation plus analysis block 200, step 2010. WEISHAUPT discloses that one or a plurality of regions is then identified in the processed image; in one preferred embodiment, a region is a connected group of pixels which can correspond to one touch, for example a group of connected pixels all having a value above a predetermined threshold. Therefore, when WEISHAUPT performs said gray level filtering, the gray level information corresponding to the plurality of regions of touch is acquired in the image 8000. Applicant has not presented evidence that the order of performing these steps is patentably distinct or imparts distinctive structural characteristics to the final product).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to use WEISHAUPT's teachings in Xiao's invention to segment the image of the capacitive touch comprising pixels having touch intensity and identify regions of the segmented image; find local maxima, each local maximum being of size one pixel on a sub-region inside each region; and determine at least one touch position based on said local maxima.

In regard to claim 15, Xiao discloses a non-transitory computer readable medium storing a computer instruction causing at least one processor of an electronic apparatus (fig. 1, memory 104 and instructions 124 with processor 102) to execute[d, understood as typo, spelling correction is needed, no claim objection is set forth] a method comprising: acquiring an image including capacitive information corresponding to a touch input (fig. 2, step 210, access capacitive image data); applying noise filtering to the acquired image by converting the image to a frequency domain image and removing noise from the frequency domain image to acquire a pre-processed image (Xiao removes noise at step 252 and performs conversion into the frequency domain at step 254, which is identical or equivalent subject matter. Applicant has not presented evidence that the order of performing these steps is patentably distinct or imparts distinctive structural characteristics to the final product from that of Xiao; see advisory action dated 01/20/2026, page 2); and inputting the pre-processed image into an artificial intelligence model configured to output a touch coordinate corresponding to a location of the touch input on the capacitive type touch screen based on touch state information and touch type information determined from the image, and wherein the acquiring the touch coordinate further comprises: inputting the pre-processed image into the artificial intelligence model and acquiring the touch coordinate information corresponding to the touch input (steps 240-250 access vibro-acoustic data or other data produced by the touch event and determine touch type, para 0033-0034. Fig. 2, comprising steps 250-258, is interpreted as an artificial intelligence model or equivalent subject matter thereof. Para 0050 describes that the classification module 258 includes a neural network which can use different classifiers based on different features of the touch, for example, one classifier for touch events with small contact areas and another for touch events with large contact areas. Therefore, Xiao at step 250 determines a touch type as well as touch state information such as, but not limited to, vibro-acoustic data/state corresponding to the touches, accessing capacitive image data obtained from a capacitive touch panel).

Xiao does not disclose "wherein the acquiring the pre-processed image further comprises: segmenting the image to which the noise filtering is applied into a plurality of areas, and acquiring the pre-processed image including gray level information corresponding to the plurality of areas." WEISHAUPT discloses wherein the acquiring the pre-processed image further comprises: segmenting the image to which the noise filtering is applied into a plurality of areas, and acquiring the pre-processed image including gray level information corresponding to the plurality of areas (the filtered image of step 1010 of the pre-processing block 1000 is input into first and second gray level filtering processes to remove sharp noises and filter strong perturbations, respectively, and is then used as the input (image 8000) of the segmentation plus analysis block 200, step 2010. WEISHAUPT discloses that one or a plurality of regions is then identified in the processed image; in one preferred embodiment, a region is a connected group of pixels which can correspond to one touch, for example a group of connected pixels all having a value above a predetermined threshold. Therefore, when WEISHAUPT performs said gray level filtering, the gray level information corresponding to the plurality of regions of touch is acquired in the image 8000. Applicant has not presented evidence that the order of performing these steps is patentably distinct or imparts distinctive structural characteristics to the final product). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to use WEISHAUPT's teachings in Xiao's invention to segment the image of the capacitive touch comprising pixels having touch intensity and identify regions of the segmented image; find local maxima, each local maximum being of size one pixel on a sub-region inside each region; and determine at least one touch position based on said local maxima.
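The segmentation-and-local-maxima technique the rejection attributes to WEISHAUPT (identify connected groups of pixels above a gray-level threshold, then take a one-pixel local maximum in each region as a touch position) can be sketched in pure Python. This is an illustrative reconstruction only, not code from WEISHAUPT or Xiao; the grid values, the threshold, and the function name `touch_positions` are invented for the example.

```python
from collections import deque

def touch_positions(grid, threshold):
    """Segment a gray-level capacitive image into connected regions of
    pixels above a threshold, then report each region's one-pixel local
    maximum as a touch coordinate (illustrative sketch only)."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    positions = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or grid[r][c] <= threshold:
                continue
            # Flood-fill one connected region (4-neighbourhood).
            region, queue = [], deque([(r, c)])
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                region.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx] and grid[ny][nx] > threshold):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            # One-pixel local maximum inside the region -> touch position.
            positions.append(max(region, key=lambda p: grid[p[0]][p[1]]))
    return positions

# Two simulated touches on a 5x6 capacitance grid (values are made up).
grid = [
    [0, 1, 1, 0, 0, 0],
    [1, 9, 3, 0, 0, 0],
    [1, 3, 2, 0, 0, 0],
    [0, 0, 0, 0, 4, 7],
    [0, 0, 0, 0, 2, 3],
]
print(touch_positions(grid, threshold=1))  # → [(1, 1), (3, 5)]
```

Each above-threshold region maps to one reported touch, which mirrors the "connected group of pixels which can correspond to one touch" language quoted from WEISHAUPT.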
In regard to claims 5 and 19, Xiao as modified by WEISHAUPT discloses the electronic apparatus of claim 1 and the non-transitory computer readable medium according to claim 15, wherein the artificial intelligence model (fig. 2, Xiao) comprises: a first layer block configured to output touch state information (fig. 2, step 240, Xiao), a second layer block configured to output touch type information (fig. 2, step 250, Xiao), and a third layer block configured to output a touch coordinate (performs action at 260 based on outputting a touch coordinate, fig. 2, Xiao), and wherein the artificial intelligence model is configured such that an output of the first layer block is provided to the second layer block, and an output of the second layer block is provided to the third layer block (fig. 2, Xiao).

In regard to claim 21, Xiao as modified by WEISHAUPT discloses the electronic apparatus according to claim 1, wherein the artificial intelligence model is a neural network model trained to output a touch coordinate (para 0050, fig. 2, classification module 258 includes a neural network, Xiao).

In regard to claim 23, Xiao as modified by WEISHAUPT discloses the electronic apparatus of claim 1, wherein the artificial intelligence model (fig. 2, Xiao) comprises: a first layer block configured to output touch state information (fig. 2, step 240, Xiao), a second layer block configured to output touch type information (fig. 2, step 250, Xiao), and a third layer block configured to output a touch coordinate (performs action at 260 based on outputting a touch coordinate, fig. 2, Xiao), wherein the first layer block, the second layer block, and the third layer block are each implemented as a convolution neural network (CNN) (fig. 2, classification module 258 can include neural networks, Xiao).
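Claims 5, 19 and 23 recite a model built from three chained layer blocks: the first outputs touch state, the second outputs touch type from the first block's output, and the third outputs the touch coordinate from the second block's output. The dataflow (not the actual CNNs) can be sketched with stand-in functions; every classification rule below is an invented placeholder, not anything from the claims or references.

```python
# Toy sketch of the three-block dataflow recited in claims 5/19/23.
# The "blocks" are placeholder functions standing in for CNN layer blocks.

def state_block(image):
    # Placeholder wet/dry decision from total signal energy (invented rule).
    total = sum(sum(row) for row in image)
    return ("wet" if total > 50 else "dry"), image

def type_block(state_and_image):
    state, image = state_and_image
    # Placeholder finger/non-finger decision from peak value (invented rule).
    peak = max(max(row) for row in image)
    touch_type = "finger" if peak > 5 else "non-finger"
    return state, touch_type, image

def coordinate_block(state_type_image):
    state, touch_type, image = state_type_image
    # Coordinate = location of the peak (argmax over the grid).
    coord = max(
        ((r, c) for r in range(len(image)) for c in range(len(image[0]))),
        key=lambda p: image[p[0]][p[1]],
    )
    return {"state": state, "type": touch_type, "coordinate": coord}

image = [[0, 1, 0], [2, 9, 3], [0, 1, 0]]
# Outputs chain block 1 -> block 2 -> block 3, as the claims recite.
print(coordinate_block(type_block(state_block(image))))
# → {'state': 'dry', 'type': 'finger', 'coordinate': (1, 1)}
```

The point of the sketch is only the wiring: each block's output is the next block's input, which is the structural feature the examiner maps onto Xiao's fig. 2 pipeline.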
In regard to claim 22, Xiao as modified by WEISHAUPT discloses the electronic apparatus of claim 1, wherein the at least one processor is configured to: segment the image to which the noise filtering is applied into a plurality of areas, and acquire the pre-processed image including gray level information corresponding to the plurality of areas, and input the pre-processed image into the artificial intelligence model and acquire the touch coordinate information corresponding to the touch input (the filtered image of step 1010 of the pre-processing block 1000 is input into first and second gray level filtering processes to remove sharp noises and filter strong perturbations, respectively, and is then used as the input (image 8000) of the segmentation plus analysis block 200, step 2010. WEISHAUPT discloses that one or a plurality of regions is then identified in the processed image; in one preferred embodiment, a region is a connected group of pixels which can correspond to one touch, for example a group of connected pixels all having a value above a predetermined threshold. Therefore, when WEISHAUPT performs said gray level filtering, the gray level information corresponding to the plurality of regions of touch is acquired in the image 8000. Applicant has not presented evidence that the order of performing these steps is patentably distinct or imparts distinctive structural characteristics to the final product, WEISHAUPT).

Allowable Subject Matter

Claims 4, 6-10, 14, 18 and 20 are objected to as being dependent upon a rejected base claim but would be allowable if rewritten in independent form including all the limitations of the base claim and any intervening claims.
In regard to claim 4, Xiao as modified by WEISHAUPT discloses the electronic apparatus of claim 1. Xiao as modified by WEISHAUPT does not disclose wherein the at least one processor is configured to: based on acquiring the image while the electronic apparatus is located in a first direction, apply noise filtering to the image and acquire the pre-processed image, and based on acquiring the image while the electronic apparatus is located in a second direction different from the first direction, apply noise filtering to the image and rotate the image to which the noise filtering is applied from the second direction to the first direction to acquire the pre-processed image.

In regard to claim 6, Xiao as modified by WEISHAUPT discloses the electronic apparatus of claim 5. Xiao as modified by WEISHAUPT does not disclose wherein the first layer block is configured to output touch state information of at least one of a wet state or a dry state, and the second layer block is configured to output first type information of a finger touch or a non-finger touch, second type information of an entire finger touch or a partial finger touch, or third type information of a thumb touch.

In regard to claim 7, Xiao as modified by WEISHAUPT discloses the electronic apparatus of claim 5. Xiao as modified by WEISHAUPT does not disclose wherein the artificial intelligence model further comprises: a first operation block configured to concatenate the output of the first layer block and the output of the second layer block, and a second operation block configured to concatenate the output of the third layer block and the output of the first operation block, and wherein the touch coordinate is acquired based on an output of the second operation block.

In regard to claim 8, Xiao as modified by WEISHAUPT discloses the electronic apparatus of claim 5. Xiao does not disclose wherein the first layer block, the second layer block, and the third layer block are each implemented as a convolution neural network (CNN), wherein the artificial intelligence model comprises: a first middle layer located between the first layer block and the second layer block, and a second middle layer located between the second layer block and the third layer block, and wherein the first middle layer and the second middle layer are implemented as a recurrent neural network (RNN). Claim 9 depends on claim 8. Claim 10 depends on claim 9.

In regard to claim 14, Xiao as modified by WEISHAUPT discloses the method of acquiring touch coordinates of claim 11. Xiao does not disclose wherein the acquiring the pre-processed image comprises: based on acquiring the image while the electronic apparatus is located in a first direction, applying noise filtering to the image and acquiring the pre-processed image; and based on acquiring the image while the electronic apparatus is located in a second direction different from the first direction, applying noise filtering to the image and rotating the image to which the noise filtering is applied from the second direction to the first direction to acquire the pre-processed image.
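Claims 4 and 14 (indicated allowable) add orientation handling: an image captured while the device is in a second direction is rotated back to the first direction after noise filtering, so the model always sees one canonical orientation. A minimal sketch of that normalization step follows; the orientation labels and the stubbed-out filtering stage are assumptions for illustration, not the claimed implementation.

```python
def rotate_90_cw(image):
    """Rotate a 2D grid 90 degrees clockwise (pure-Python sketch)."""
    return [list(row) for row in zip(*image[::-1])]

def preprocess(image, orientation):
    # Hypothetical orientation handling in the spirit of claims 4/14:
    # images captured in a "second direction" (here, landscape) are
    # rotated to the "first direction" (portrait) after noise filtering.
    filtered = image  # the claimed noise filtering would happen here
    if orientation == "landscape":
        filtered = rotate_90_cw(filtered)
    return filtered

img = [[1, 2, 3],
       [4, 5, 6]]
print(preprocess(img, "landscape"))  # → [[4, 1], [5, 2], [6, 3]]
```

Rotating to a single canonical orientation means one model can serve both device orientations without retraining, which is the practical upshot of the claimed limitation.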
In regard to claim 18, Xiao as modified by WEISHAUPT discloses the non-transitory computer readable medium according to claim 15. Xiao as modified by WEISHAUPT does not disclose wherein the acquiring the pre-processed image comprises: based on acquiring the image while the electronic apparatus is located in a first direction, applying noise filtering to the image and acquiring the pre-processed image; and based on acquiring the image while the electronic apparatus is located in a second direction different from the first direction, applying noise filtering to the image and rotating the image to which the noise filtering is applied from the second direction to the first direction to acquire the pre-processed image.

In regard to claim 20, Xiao as modified by WEISHAUPT discloses the non-transitory computer readable medium according to claim 19. Xiao as modified by WEISHAUPT does not disclose wherein the first layer block is configured to output touch state information of at least one of a wet state or a dry state, and the second layer block is configured to output first type information of a finger touch or a non-finger touch, second type information of an entire finger touch or a partial finger touch, or third type information of a thumb touch.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEEPROSE SUBEDI, whose telephone number is (571) 270-7977. The examiner can normally be reached Monday-Friday, 8 AM-5 PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, KE XIAO, can be reached at 571-272-7776. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/DEEPROSE SUBEDI/
Primary Examiner, Art Unit 2627

Prosecution Timeline

Oct 03, 2023
Application Filed
May 08, 2025
Non-Final Rejection — §103
Aug 14, 2025
Response Filed
Aug 19, 2025
Final Rejection — §103
Oct 22, 2025
Request for Continued Examination
Nov 01, 2025
Response after Non-Final Action
Nov 05, 2025
Final Rejection — §103
Jan 07, 2026
Response after Non-Final Action
Feb 06, 2026
Request for Continued Examination
Feb 17, 2026
Response after Non-Final Action
Feb 19, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602137
TOUCH DISPLAY PANEL
2y 5m to grant · Granted Apr 14, 2026
Patent 12578830
TOUCH STRUCTURE, TOUCH DISPLAY PANEL, AND DISPLAY DEVICE
2y 5m to grant · Granted Mar 17, 2026
Patent 12572237
TOUCH DISPLAY DEVICE AND TOUCH DRIVING CIRCUIT
2y 5m to grant · Granted Mar 10, 2026
Patent 12572247
ELECTRONIC DEVICE COMPRISING SUBSTRATE, CIRCUIT LAYER, AND LIGHT EMITTING ELEMENT LAYER ON THE CIRCUIT LAYER
2y 5m to grant · Granted Mar 10, 2026
Patent 12561054
METHOD AND APPARATUS FOR UNLOCKING BASED ON USER INPUT
2y 5m to grant · Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 87%
With Interview: 99% (+13.8%)
Median Time to Grant: 1y 10m
PTA Risk: High
Based on 515 resolved cases by this examiner. Grant probability derived from career allow rate.
