Prosecution Insights
Last updated: April 19, 2026
Application No. 18/348,109

ELECTRONIC APPARATUS AND METHOD FOR PROCESSING INPUT FROM STYLUS PEN IN ELECTRONIC APPARATUS

Final Rejection — §103
Filed: Jul 06, 2023
Examiner: RIEGLER, PATRICK F
Art Unit: 2171
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics Co., Ltd.
OA Round: 4 (Final)
Grant Probability: 55% (Moderate)
Expected OA Rounds: 5-6
Estimated Time to Grant: 4y 5m
Allowance Rate With Interview: 89%

Examiner Intelligence

Career Allow Rate: 55% (189 granted / 346 resolved) — grants 55% of resolved cases; at TC average
Interview Lift: +34.6% (strong) — allowance rate for resolved cases with an interview vs. without
Avg Prosecution (typical timeline): 4y 5m
Currently Pending: 36
Total Applications (career history): 382, across all art units
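The headline figures above are simple ratios. A minimal sketch (assuming the interview lift is the gap between the with-interview allowance rate and the examiner's career baseline; the dashboard's exact methodology is not stated here) reproduces them:

```python
# Illustrative recomputation of the examiner statistics shown above.
# The "without interview" baseline is assumed to track the career
# average, which is an editor's assumption, not dashboard methodology.
granted, resolved = 189, 346

allow_rate = granted / resolved           # career allowance rate
with_interview_rate = 0.89                # allowance rate after an interview

lift = with_interview_rate - allow_rate   # approximate interview lift

print(f"Career allow rate: {allow_rate:.1%}")  # ~54.6%, shown rounded as 55%
print(f"Interview lift: +{lift:.1%}")          # ~+34.4%, close to the +34.6% shown
```

The small gap between the computed +34.4% and the displayed +34.6% suggests the dashboard's "without interview" baseline is computed on the no-interview subset rather than on all resolved cases.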

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§102: 14.5% (-25.5% vs TC avg)
§103: 51.9% (+11.9% vs TC avg)
§112: 18.2% (-21.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 346 resolved cases.
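As an illustrative consistency check on the figures above (nothing more; the Tech Center baseline itself is not published here), subtracting each statute's delta from the examiner's rate recovers the same implied TC average for every statute:

```python
# Each row above is (examiner_rate, examiner_rate - tc_average), in percent.
# Subtracting the delta from the rate recovers the implied TC average.
stats = {
    "§101": (8.7, -31.3),
    "§102": (14.5, -25.5),
    "§103": (51.9, +11.9),
    "§112": (18.2, -21.8),
}

for statute, (rate, delta) in stats.items():
    implied_tc_avg = rate - delta
    print(f"{statute}: implied TC average = {implied_tc_avg:.1f}%")
# every statute implies the same TC average of 40.0%
```

That all four statutes imply an identical 40.0% baseline suggests the dashboard compares each statute-specific rate against a single overall TC allowance estimate rather than per-statute averages.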

Office Action

DETAILED ACTION

This FINAL action is in response to Application No. 18/348,109, filed 7/6/2023, which claims priority from PCT/KR2022/000425, filed 1/11/2022, and KR10-2021-0004407, filed 1/13/2021. The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. The amendment presented on 9/30/2025, which amends claims 1, 9, and 17-19, is hereby acknowledged. Claims 1-4, 7-14, and 16-19 are currently pending.

Claim Objections

Claim 1 is objected to because of the following informalities: it appears “…delete the selected portion of characters” was intended. Appropriate correction is required.

Response to Arguments

Applicant's arguments with respect to claims 1-4, 7-14, and 16-19 have been considered; however, the amendments to the claims necessitated a new consideration and search, resulting in the new prior art cited below.

Claim Rejections - 35 U.S.C. § 103

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. Claims 1, 3, 4, 7, 8, 16, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Markiewicz et al. (US 2012/0121181 A1, hereinafter “Markiewicz”), in view of Missig et al. (US 2020/0356254 A1, from IDS received 7/6/2023, hereinafter “Missig”), in view of Kumar (US 2016/0379048 A1), and further in view of Shih et al. (US 2015/0009154 A1, hereinafter “Shih”).
Regarding claim 1, Markiewicz teaches an electronic apparatus comprising: a display; at least one processor; and memory storing instructions that, when executed by the at least one processor individually or collectively, cause the electronic device to: convert handwriting data being input into text data based on input of the handwriting data with a stylus pen being detected in a second input area comprising a first input area in a first state of waiting for input of handwriting data, and switch to a second state in which the converted text data is displayed as preview data in the first input area while displaying the handwriting data being input in the second input area…

More specifically, at least Figures 2A-2C depict the conversion of handwritten input 206 from a stylus into text 208 for input fields. Figure 3 logically depicts the first input area 202 that receives the converted text and the second input area 302 or 304 that receives the handwritten input. Referring again to Figures 2A-2C, after strokes are received, a preview of the recognized text (“writing”) 208 is displayed in the first input area simultaneously with the handwritten input (Markiewicz, [0048]; Figure 2B is equivalent to the claimed second state). Figure 2C depicts the claimed third state, in which “writing” 208 has completely replaced the handwritten input (Markiewicz, [0049]).
However, Markiewicz may not explicitly teach every aspect of: wherein the preview data is provided as a preview for a user before being determined as input data; determine the text data displayed as the preview data as input data and switch to a third state in which the converted text data is displayed in the first input area based on termination of the input of the handwriting data being detected; and in the third state, when detecting a selection of a portion of characters among letters included in the text data displayed in the first input area by moving the stylus pen from left to right …, and subsequently detecting a deletion gesture on the selected portion of characters … delete the selected portion of characters.

Missig discloses the converting of handwritten input to text data as depicted in at least Figures 6A-6YY. In Figure 6P, a user enters handwriting input 604-6 (“Salem”) into a form using a stylus (first state). In Figure 6Q, a preview 606 of recommended text is displayed adjacent to the handwriting input (second state). In Figure 6R, a gesture selects the recommended text. In Figure 6S, the selected recommended text is entered into the form, and the preview, the handwriting, and the gray input shading surrounding the handwriting in Figures 6Q and 6R, or any expanded region (construed as the second input area), are removed (third state) (Missig, [0223]-[0226], [0229]). Stylus selection gestures can include swipes upward and/or downward (Missig, [0145]). The form fields in the user interfaces are construable as second input areas, and they are modified in certain situations (Missig, Figures 6J, 6K, 6Q, 6R, and 6U). After a threshold amount of time (e.g., 0.5 seconds, 1 second, 2 seconds, 3 seconds, 5 seconds) has passed since the user entered handwritten input 604-1 (e.g., “1234”), the device 600 terminates handwritten input and determines that handwritten input 604-1 corresponds to the characters “1234”.
In other words, device 600 analyzes handwritten input 604-1 and recognizes the user's writing as the characters “1234” (Missig, [0213]). Missig additionally suggests using a stylus to select/highlight text with a horizontal gesture 806 that passes through a portion of text, followed by a deletion gesture that includes a horizontal component and deletes the selected/highlighted text (Missig, [0322]-[0323], [0334]-[0335]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, given the teachings of Markiewicz and Missig, that a method for converting handwritten input into text data would include a preview of recommended/recognized text that is removed when selected for entry, as well as a horizontal gesture that selects/highlights text and a deletion gesture that deletes the selected/highlighted text. With Markiewicz and Missig both disclosing converting handwritten input into text data, and with Missig additionally disclosing offering a preview of recommended text for selection and entry into the user interface, as well as a horizontal gesture that selects/highlights text and a deletion gesture that deletes the selected/highlighted text, one of ordinary skill in the art implementing a method for converting handwritten input into text data would include such a preview and such gestures in order to allow a user to verify the accuracy of their handwritten input in a user interface and to utilize typical, intuitive gestures for performing common functions. One would therefore be motivated to combine these teachings, as doing so would create this method for converting handwritten input into text data.
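For orientation, the three claimed states that the cited art is mapped against (waiting, preview, committed on termination of input) can be summarized as a small state machine. This is an editor's illustrative sketch of the claim language only, not code from any cited reference or party; all names are hypothetical:

```python
# Hypothetical sketch of the claimed three-state handwriting flow:
# first state  - waiting for stylus input in the second input area
# second state - handwriting shown in the second input area while the
#                recognized text is previewed in the first input area
# third state  - on termination of input (e.g., a timeout, cf. the
#                threshold time in Missig [0213]), the preview is
#                committed as input data in the first input area
WAITING, PREVIEW, COMMITTED = "first", "second", "third"

class HandwritingField:
    def __init__(self):
        self.state = WAITING
        self.preview = ""   # preview data shown in the first input area
        self.text = ""      # committed input data

    def on_stroke(self, recognized: str):
        # Stylus strokes move the field into the preview (second) state.
        self.state = PREVIEW
        self.preview = recognized

    def on_input_terminated(self):
        # Termination of handwriting input commits the preview (third state).
        if self.state == PREVIEW:
            self.text = self.preview
            self.preview = ""
            self.state = COMMITTED

field = HandwritingField()
field.on_stroke("writing")
field.on_input_terminated()
print(field.state, field.text)  # third writing
```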
However, Markiewicz and Missig may not explicitly teach every aspect of [the deletion occurs] by repeatedly moving the stylus pen from left to right and right to left. Kumar discloses handwriting text processing (Kumar, Abstract). The input implement can be a stylus (Kumar, [0010]). A writing gesture may include, for example, a strikethrough gesture such as an approximate horizontal line (e.g., approximately parallel to a writing plane for the text) through a character, through a word, etc. The strikethrough gesture may also include, for example, a double line, a meandering line, a scribble, and so on. In this case, the recognizer gesture executor 44 may ignore and/or erase a portion (e.g., a character, a word, etc.) of the handwritten original text 42 based on the strikethrough gesture (Kumar, [0023]; that the horizontal gesture can include a double line or a scribble suggests a left-to-right and right-to-left gesture with a stylus). A writing gesture may include an overwrite gesture such as a character or a word overwritten by another character or word (e.g., a character with a heavier weight, width, etc.). In this case, the recognizer gesture executor 44 may replace a portion (e.g., a character, a word, etc.) of the handwritten original text 42 based on the overwrite gesture (Kumar, [0025], [0037]). Kumar additionally describes adding functionality to a stylus with a button (Kumar, [0035]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, given the teachings of Markiewicz and Missig with Kumar, that a method for handwritten input with a stylus would include typical editing gestures with a stylus, such as left-to-right and right-to-left input over a portion for deletion.
With Markiewicz, Missig, and Kumar disclosing handwritten input with a stylus including editing gestures, and with Kumar additionally disclosing typical editing gestures with a stylus such as left-to-right and right-to-left input over a portion for deletion, one of ordinary skill in the art implementing a method for handwritten input with a stylus would include such gestures in order to allow a user to utilize typical stylus gestures for editing textual input. One would therefore be motivated to combine these teachings, as doing so would create this method for converting handwritten input into text data.

However, Markiewicz, Missig, and Kumar may not explicitly teach every aspect of [the selection occurs] with a button of the stylus pen pressed; [the deletion occurs] with the button of the stylus pen released. Shih discloses interacting with a stylus, wherein the stylus transmits a control signal to the electronic device (Shih, Abstract). A user selecting text content requires a button being pressed on the stylus (Shih, [0033]). Sliding the stylus along the selected text performs a delete gesture (Shih, [0039]; there is no requirement of a button press for this delete gesture). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, given the teachings of Markiewicz, Missig, and Kumar with Shih, that a method for handwritten input with a stylus with selection and deletion gestures would include using a button with the selection and not using the button for deletion.
With Markiewicz, Missig, and Kumar with Shih disclosing handwritten input with a stylus including editing gestures, with Missig, Kumar, and Shih disclosing gestures for selecting and deleting content, and with Shih additionally disclosing using a button press on the stylus for content selection and not using the button press during content deletion, one of ordinary skill in the art implementing a method for handwritten input with a stylus with selection and deletion gestures would include using a button with the selection and not using the button for deletion in order to provide an easy way to distinguish the selection function from the deletion function, because the stylus gestures for each are similar. One would therefore be motivated to combine these teachings, as doing so would create this method for converting handwritten input into text data.

Regarding claim 3, Markiewicz, Missig, and Kumar with Shih teach the electronic apparatus of claim 1, wherein the instructions, when executed by the at least one processor, individually or collectively, cause the electronic device to: based on the handwriting data being input in the second area being out of the second input area in the second state, recognize the handwriting data which is out of the second input area as handwriting data input in the second input area. More specifically, if the handwriting is performed within a threshold area around the boundary of a text entry field, the handwritten input is still interpreted as a request to enter text within the respective text entry field. In some embodiments, text entry fields have a margin of error or tolerance such that handwritten input that is slightly outside of the literal boundary of the text entry field (e.g., 1 mm, 2 mm, 3 mm, 5 mm, 3 points, 6 points, 12 points, etc.) will still be considered to be a request to input text within the respective text entry field (Missig, [0210]).
Figures 6J, 6K, 6U, and 6V depict the enlarging of a first input area to a second input area for receiving handwritten input.

Regarding claim 4, Markiewicz, Missig, and Kumar with Shih teach the electronic apparatus of claim 1, wherein the instructions, when executed by the at least one processor, individually or collectively, cause the electronic device to: provide a toolbar comprising functions available via the stylus pen at a position adjacent to the second input area. More specifically, at least Figures 10-14 depict several tools around the input area (Markiewicz, [0079]-[0080]).

Regarding claim 7, Markiewicz, Missig, and Kumar with Shih teach the electronic apparatus of claim 1, wherein the instructions, when executed by the at least one processor, individually or collectively, cause the electronic device to: based on an input from the stylus pen not being detected for a specified time in the second state, based on a touch input from the stylus pen being detected in an area other than the second input area, or based on a touch input from a first object being detected in the second input area, detect termination of the input of the handwriting data and switch to the third state. More specifically, after a threshold amount of time (e.g., 0.5 seconds, 1 second, 2 seconds, 3 seconds, 5 seconds) has passed since the user entered handwritten input 604-1 (e.g., “1234”), the device 600 terminates handwritten input and determines that handwritten input 604-1 corresponds to the characters “1234”.
Regarding claim 8, Markiewicz, Missig, and Kumar with Shih teach the electronic apparatus of claim 1, wherein the instructions, when executed by the at least one processor, individually or collectively, cause the electronic device to: based on detecting one of a plurality of different types of editing gestures for the text data displayed in the first input area using the stylus pen in the third state, perform an editing function for the text data based on a type of the detected editing gesture, wherein the plurality of different types of gestures indicate deleting a character, deleting multiple successive characters, inserting a character, and deleting a space between characters. More specifically, as shown in FIG. 8, when the pen is hovered over the word box for the word "new," a word box border 802 is provided and the text color for the word "new" is changed. Additionally, a clear button 804 is displayed over the word box. If the user selects the clear button 804, the corresponding word is removed (Markiewicz, [0074]). Additionally, Missig discloses gestures for deleting characters and words (Missig, [0335], [0362]), inserting characters (Missig, [0411], [0431]), and deleting a space (Missig, [0432]).

Regarding claim 16, Markiewicz, Missig, and Kumar with Shih teach the electronic apparatus of claim 1, wherein the instructions, when executed by the at least one processor, individually or collectively, cause the electronic device to: in the second state, control the display to display text data in letters based on the handwriting data as the recommended text data. More specifically, at least Figure 2B depicts a second state where converted letters are shown in a preview 208 while the handwritten input 206 is still visible (Markiewicz, Figure 2B). Additionally, Figures 6J, 6K, 6U, and 6V depict the enlarging of a first input area to a second input area for receiving handwritten input.
Figures 6P, 6Q, 6R, and 6S depict the display of recommended text “Salem” adjacent to the handwritten input area, and the selection with the stylus of “Salem” to use as text data input (Missig, [0224]-[0226]).

Regarding claim 17, Markiewicz, Missig, and Kumar with Shih teach the electronic apparatus of claim 1, wherein the instructions, when executed by the at least one processor, individually or collectively, cause the electronic device to: in the third state, control the display to cancel display of the second input area. More specifically, FIGS. 6T-6W illustrate an embodiment in which a text entry field extends its boundaries to provide for a more comfortable or natural writing position based on the location of the text entry field on the display. After the handwritten input is completed and converted into font-based text, the text entry field returns to its original size and shape, as shown in FIG. 6W (Missig, [0228]-[0229]).

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Markiewicz, Missig, and Kumar with Shih, and further in view of Chambers et al. (US 2003/0214531 A1, hereinafter “Chambers”).
Regarding claim 2, Markiewicz, Missig, and Kumar with Shih teach the electronic apparatus of claim 1, including the expansion of an input area based on the space needed under current conditions (Missig, at least Figures 6J, 6K, and 6U); however, Markiewicz and Missig may not explicitly teach every aspect of wherein the instructions, when executed by the at least one processor, individually or collectively, cause the electronic device to: execute a first application, identify the first input area provided by the first application, configure an area obtained by extending the first input area by a specified area as the second input area, and switch to the first state of displaying the second input area and waiting for the input of the handwriting data based on an event indicating a hovering input or a touch input being received from the stylus pen. Chambers describes recognizing handwriting input where the (first) input region 1002/1009 can be expanded into a second input region 1010 in order to receive handwritten input. The region is expanded when a stylus is detected to be hovering over region 1002 (Chambers, [0081], [0089]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, given the teachings of Markiewicz, Missig, Kumar, and Shih with Chambers, that a method for converting handwritten input into text data would include expanding an input area for handwritten input when a stylus is hovering over the input area.
With Markiewicz, Missig, and Chambers disclosing converting handwritten input into text data, as well as a second input area for receiving the handwritten input that is distinct from a first input area that receives text data, and with Chambers additionally disclosing that the second input area for handwritten input is an expansion of the first input area when a stylus is hovering over the first input area, one of ordinary skill in the art implementing a method for converting handwritten input into text data would include expanding an input area for handwritten input when a stylus is hovering over the input area in order to provide more room for handwritten input, which is typically too large to fit in text boxes on a user interface. One would therefore be motivated to combine these teachings, as doing so would create this method for converting handwritten input into text data.

Claims 9, 11-14, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Markiewicz and Missig, in view of Xia et al. (US 2014/0365949 A1, cited previously by the Examiner on the PTO-892 mailed 6/10/2024, hereinafter “Xia”), and further in view of Waddel et al. (US 2018/0300542 A1, hereinafter “Waddel”).
Regarding claim 9, Markiewicz teaches an electronic apparatus comprising: a display; at least one processor; and memory storing instructions that, when executed by the at least one processor individually or collectively, cause the electronic device to: convert handwriting data being input into text data based on input of the handwriting data with a stylus pen being detected in a second input area comprising a first input area in a first state of waiting for input of handwriting data, and switch to a second state in which the converted text data is displayed as preview data in the first input area while displaying the handwriting data being input in the second input area…

More specifically, at least Figures 2A-2C depict the conversion of handwritten input 206 from a stylus into text 208 for input fields. Figure 3 logically depicts the first input area 202 that receives the converted text and the second input area 302 or 304 that receives the handwritten input. Referring again to Figures 2A-2C, after strokes are received, a preview of the recognized text (“writing”) 208 is displayed in the first input area simultaneously with the handwritten input (Markiewicz, [0048]; Figure 2B is equivalent to the claimed second state). Figure 2C depicts the claimed third state, in which “writing” 208 has completely replaced the handwritten input (Markiewicz, [0049]).

However, Markiewicz may not explicitly teach every aspect of: wherein the preview data is provided as a preview for a user before being determined as input data; determine the text data displayed as the preview data as input data and switch to a third state in which the converted text data is displayed in the first input area based on termination of the input of the handwriting data being detected. Missig discloses the converting of handwritten input to text data as depicted in at least Figures 6A-6YY. In Figure 6P, a user enters handwriting input 604-6 (“Salem”) into a form using a stylus (first state).
In Figure 6Q, a preview 606 of recommended text is displayed adjacent to the handwriting input (second state). In Figure 6R, a gesture selects the recommended text. In Figure 6S, the selected recommended text is entered into the form, and the preview, the handwriting, and the gray input shading surrounding the handwriting in Figures 6Q and 6R, or any expanded region (construed as the second input area), are removed (third state) (Missig, [0223]-[0226], [0229]). Stylus selection gestures can include swipes upward and/or downward (Missig, [0145]). The form fields in the user interfaces are construable as second input areas, and they are modified in certain situations (Missig, Figures 6J, 6K, 6Q, 6R, and 6U). After a threshold amount of time (e.g., 0.5 seconds, 1 second, 2 seconds, 3 seconds, 5 seconds) has passed since the user entered handwritten input 604-1 (e.g., “1234”), the device 600 terminates handwritten input and determines that handwritten input 604-1 corresponds to the characters “1234”. In other words, device 600 analyzes handwritten input 604-1 and recognizes the user's writing as the characters “1234” (Missig, [0213]). Missig discloses there are options for inputting emojis (Missig, [0835]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, given the teachings of Markiewicz and Missig, that a method for converting handwritten input into text data would include a preview of recommended/recognized text that is removed when selected for entry.
With Markiewicz and Missig both disclosing converting handwritten input into text data, and with Missig additionally disclosing offering a preview of recommended text for selection and entry into the user interface, one of ordinary skill in the art implementing a method for converting handwritten input into text data would include a preview of recommended/recognized text that is removed when selected for entry in order to allow a user to verify the accuracy of their handwritten input in a user interface. One would therefore be motivated to combine these teachings, as doing so would create this method for converting handwritten input into text data.

However, Markiewicz and Missig may not explicitly teach every aspect of: in the third state, when the handwriting data input in the second state is detected as emoji data, detect first emoji data corresponding to the emoji data drawn by the handwriting data among a plurality of pieces of emoji data stored in the electronic device and display the first emoji data in the first input area. Xia discloses handwriting input functionality on a user device (Xia, Abstract). The input device can be a stylus (Xia, [0084]). The handwriting recognition method includes recognizing emoji characters from handwritten input and displaying the emoji characters (Xia, [0022]). The emoji characters are stored in a multi-script training corpus (Xia, [0164]; at least Figures 8A and 13D). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, given the teachings of Markiewicz and Missig with Xia, that a method for converting handwritten input into text data would include having handwritten data converted to emojis and displayed when emojis are recognized within the handwriting.
With Markiewicz, Missig, and Xia disclosing converting handwritten input into text data, with Missig and Xia disclosing that emojis can be included within input, and with Xia additionally disclosing handwritten data converted to emojis and displayed when emojis are recognized within the handwriting, one of ordinary skill in the art implementing a method for converting handwritten input into text data would include having handwritten data converted to emojis and displayed when emojis are recognized within the handwriting in order to allow a user to utilize the popular activity of using emojis when entering text on devices. One would therefore be motivated to combine these teachings, as doing so would create this method for converting handwritten input into text data.

However, Markiewicz and Missig with Xia may not explicitly teach every aspect of: while displaying the first emoji data in the first input area, display an input window including a plurality of emoji data at a position adjacent to the first emoji data; and when detecting a selection of second emoji data using the stylus pen from the input window, display the second emoji data in the first input area instead of the first emoji data in the third state. Waddel discloses a method for enabling users to draw emojis for insertion into electronic text-based messages. The system receives a handwritten drawing input from a user on a computing device. The handwritten drawing input represents an emoji for insertion into the message and comprises a series of strokes input to the computing device by the user. The system analyzes the series of strokes and matches the analyzed series of strokes to at least one emoji in a set of emojis. The user can then select the at least one emoji for insertion into the message (Waddel, Abstract). Input is with a stylus (Waddel, [0020]-[0021]).
Figures 6B-6D display an input area 630 that receives the handwritten drawing of an emoji, matching results for selection 640, and insertion into the composition (Waddel, [0068]-[0073]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, given the teachings of Markiewicz, Missig, and Xia with Waddel, that a method for converting handwritten input into text data would include having handwritten data converted to emojis and displayed when emojis are recognized within the handwriting, as well as displaying many alternative choices for selection.

With Markiewicz, Missig, Xia, and Waddel disclosing converting handwritten input into text data, with Missig, Xia, and Waddel disclosing that emojis can be included within input, with Xia and Waddel disclosing handwritten data converted to emojis and displayed when emojis are recognized within the handwriting, and with Waddel additionally disclosing displaying a list of alternative emoji choices that may match the drawing, for selection and insertion, one of ordinary skill in the art implementing a method for converting handwritten input into text data would include having handwritten data converted to emojis and displayed when emojis are recognized within the handwriting, as well as displaying many alternative choices for selection, in order to allow a user to utilize the popular activity of using emojis when entering text on devices; providing alternative choices also gives the system a better chance of recognizing the intentions of the user. One would therefore be motivated to combine these teachings, as doing so would create this method for converting handwritten input into text data.
Regarding claim 11, Markiewicz, Missig, and Xia with Waddel teach the electronic apparatus of claim 9, wherein the instructions, when executed by the at least one processor, individually or collectively, cause the electronic device to: based on the handwriting data being input in the second area being out of the second input area in the second state, recognize the handwriting data which is out of the second input area as handwriting data input in the second input area. More specifically, if the handwriting is performed within a threshold area around the boundary of a text entry field, the handwritten input is still interpreted as a request to enter text within the respective text entry field. In some embodiments, text entry fields have a margin of error or tolerance such that handwritten input that is slightly outside of the literal boundary of the text entry field (e.g., 1 mm, 2 mm, 3 mm, 5 mm, 3 points, 6 points, 12 points, etc.) will still be considered to be a request to input text within the respective text entry field (Missig, [0210]). Figures 6J, 6K, 6U, and 6V depict the enlarging of a first input area to a second input area for receiving handwritten input.

Regarding claim 12, Markiewicz, Missig, and Xia with Waddel teach the electronic apparatus of claim 9, wherein the instructions, when executed by the at least one processor, individually or collectively, cause the electronic device to: provide a toolbar comprising functions available via the stylus pen at a position adjacent to the second input area. More specifically, at least Figures 10-14 depict several tools around the input area (Markiewicz, [0079]-[0080]).
Regarding claim 13, Markiewicz, Missig, and Xia with Waddel teach the electronic apparatus of claim 9, wherein the instructions, when executed by the at least one processor, individually or collectively, cause the electronic device to: based on an input from the stylus pen not being detected for a specified time in the second state, based on a touch input from the stylus pen being detected in an area other than the second input area, or based on a touch input from a first object being detected in the second input area, detect termination of the input of the handwriting data and switch to the third state. More specifically, after a threshold amount of time (e.g., 0.5 seconds, 1 second, 2 seconds, 3 seconds, 5 seconds) has passed since the user entered handwritten input 604-1 (e.g., “1234”), the device 600 terminates handwritten input and determines that handwritten input 604-1 corresponds to the characters “1234”. In other words, device 600 analyzes handwritten input 604-1 and recognizes the user's writing as the characters “1234” (Missig, [0213]).

Regarding claim 14, Markiewicz, Missig, and Xia with Waddel teach the electronic apparatus of claim 9, wherein the instructions, when executed by the at least one processor, individually or collectively, cause the electronic device to: based on detecting one of a plurality of different types of editing gestures for the text data displayed in the first input area using the stylus pen in the third state, perform an editing function for the text data based on a type of the detected editing gesture, wherein the plurality of different types of editing gestures indicate deleting a character, deleting multiple successive characters, inserting a character, and deleting a space between characters. More specifically, as shown in FIG. 8, when the pen is hovered over the word box for the word "new," a word box border 802 is provided and the text color for the word "new" is changed.
Additionally, a clear button 804 is displayed over the word box. If the user selects the clear button 804, the corresponding word is removed (Markiewicz, [0074]). Additionally, Missig discloses gestures for deleting characters and words (Missig, [0335], [0362]), inserting characters (Missig, [0411] and [0431]), and deleting a space (Missig, [0432]).

Regarding claim 18, Markiewicz, Missig, and Xia with Waddel teach the electronic apparatus of claim 9, wherein the instructions, when executed by the at least one processor, individually or collectively, cause the electronic device to: in the second state, control the display to display text data in letters based on the handwriting data as the recommended text data. More specifically, at least Figure 2B depicts a second state where converted letters are shown in a preview 208 while the handwritten input 206 is still visible (Markiewicz, Figure 2B). Additionally, Figures 6J, 6K, 6U and 6V depict the enlarging of a first input area to a second input area for receiving handwritten input. Figures 6P, 6Q, 6R and 6S depict the display of recommended text “Salem” adjacent to the handwritten input area, and the selection with the stylus of “Salem” to use as text data input (Missig, [0224]-[0226]).

Regarding claim 19, Markiewicz, Missig, and Xia with Waddel teach the electronic apparatus of claim 9, wherein the instructions, when executed by the at least one processor, individually or collectively, cause the electronic device to: in the third state, control the display to cancel display of the second input area. More specifically, FIGS. 6T-6W illustrate an embodiment in which a text entry field extends its boundaries to provide for a more comfortable or natural writing position based on the location of the text entry field on the display. After the user completes writing the handwritten input and it is converted into font-based text, the text entry field returns to its original size and shape, as shown in FIG. 6W (Missig, [0228]-[0229]).
Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Markiewicz, Missig, and Xia with Waddel, and further in view of Chambers et al. (US 2003/0214531 A1, hereafter referred to as “Chambers”).

Regarding claim 10, Markiewicz, Missig, and Xia with Waddel teach the electronic apparatus of claim 9, including the expansion of an input area based on the needed space of current conditions (Missig, at least Figures 6J, 6K, and 6U); however, Markiewicz, Missig, and Xia with Waddel may not explicitly teach every aspect of the electronic apparatus of claim 9, wherein the instructions, when executed by the at least one processor, individually or collectively, cause the electronic device to: execute a first application, identify the first input area provided by the first application, configure an area obtained by extending the first input area by a specified area as the second input area, and switch to the first state of displaying the second input area and waiting for the input of the handwriting data based on an event indicating a hovering input or a touch input being received from the stylus pen. Chambers describes recognizing handwriting input where the (first) input region 1002/1009 can be expanded into a second input region 1010 in order to receive handwritten input. The region is expanded when a stylus is detected to be hovering over region 1002 (Chambers, [0081], [0089]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, given the teachings of Markiewicz, Missig, Xia, and Waddel with Chambers, that a method for converting handwritten input into text data would include expanding an input area for handwritten input when a stylus is hovering over the input area.
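The hover-triggered expansion Chambers describes, combined with the restore to original size after the input is committed as text (cf. Missig FIG. 6W, cited for claim 19 above), might be sketched as simple state on an input field. This is a hypothetical illustration; the `InputField` class and the 40-point expansion margin are assumptions, not taken from any cited reference:

```python
class InputField:
    """Sketch of a field that expands on stylus hover and restores on commit.

    `expand_by` is a hypothetical fixed margin added to both dimensions
    while the stylus hovers; the field snaps back to its original bounds
    once the handwritten input is recognized and committed as text.
    """
    def __init__(self, width: float, height: float, expand_by: float = 40.0):
        self.base = (width, height)       # original bounds
        self.bounds = (width, height)     # current (possibly expanded) bounds
        self.expand_by = expand_by

    def on_hover(self, hovering: bool) -> None:
        """Expand the field while a stylus hover is detected over it."""
        if hovering:
            w, h = self.base
            self.bounds = (w + self.expand_by, h + self.expand_by)

    def on_commit(self) -> None:
        """Handwriting recognized as text; restore the original field size."""
        self.bounds = self.base
```

The rationale in the rejection (more room for handwriting, which is typically larger than a text box) corresponds to the temporarily enlarged `bounds` during the hover-to-commit window.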
With Markiewicz, Missig, Xia, and Chambers disclosing converting handwritten input into text data, as well as a second input area for receiving the handwritten input distinct from a first input area that receives text data, and with Chambers additionally disclosing that the second input area for handwritten input is an expansion of the first input area when a stylus is hovering over the first input area, one of ordinary skill in the art implementing a method for converting handwritten input into text data would expand an input area for handwritten input when a stylus is hovering over the input area, in order to provide more room for handwritten input, which is typically too large to fit in text boxes on a user interface. One would therefore be motivated to combine these teachings, as doing so would create this method for converting handwritten input into text data.

Pertinent Prior Art

The prior art made of record on form PTO-892 and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 C.F.R. 1.111(c) to consider these references fully when responding to this action.

Hicks (US 2014/0218343 A1) – gestures for selecting and deleting text with a stylus button.
Large (US 2017/0052610 A1) – gestures for selecting and deleting text with a stylus button.
Woolf (US 7,337,389 B1) – gestures for highlighting and deleting text with a stylus button.
Song (US 2014/0210744 A1) – gestures for highlighting and deleting text with a stylus button.
Kim (US 2017/0308289 A1) – drawing emojis results in a group of suggested matches for input.
Chang (US 2021/0349627 A1) – handwriting recognition including a cross-out gesture, horizontal movement in two opposite directions being a delete command ([0660], [0675]).
Rucine (US 2017/0153806 A1) – handwriting recognition including a strike-through gesture, horizontal delete command.
Lee (US 2021/0042027 A1) – handwriting recognition with preview of text data.
Perrin (US 2016/0179758 A1) – handwriting recognition with preview of text data.
Kelso (US 2016/0179764 A1) – handwriting recognition with preview of text data.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PATRICK F RIEGLER, whose telephone number is (571) 270-3625. The examiner can normally be reached M-F 9:30am-6:00pm, ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kieu Vu, can be reached at (571) 272-4057. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PATRICK F RIEGLER/
Primary Examiner, Art Unit 2171

Prosecution Timeline

Jul 06, 2023
Application Filed
Jun 05, 2024
Non-Final Rejection — §103
Jul 30, 2024
Interview Requested
Aug 07, 2024
Examiner Interview Summary
Aug 07, 2024
Applicant Interview (Telephonic)
Sep 09, 2024
Response Filed
Dec 17, 2024
Final Rejection — §103
Mar 21, 2025
Request for Continued Examination
Mar 27, 2025
Response after Non-Final Action
Jul 01, 2025
Non-Final Rejection — §103
Sep 30, 2025
Response Filed
Nov 14, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547824
USER INTERFACE DATA ANALYZER HIGHLIGHTER
2y 5m to grant · Granted Feb 10, 2026
Patent 12542869
Video Conference Apparatus, Video Conference Method and Computer Program Using a Spatial Virtual Reality Environment
2y 5m to grant · Granted Feb 03, 2026
Patent 12535935
SYSTEMS AND METHODS FOR ANNOTATION PANELS
2y 5m to grant · Granted Jan 27, 2026
Patent 12505140
AN INFORMATION INTERACTION VIA A MULTIMEDIA CONFERENCE
2y 5m to grant · Granted Dec 23, 2025
Patent 12500984
NOTIFICATION SYSTEM NOTIFYING USER OF MESSAGE, CONTROL METHOD THEREFOR, AND STORAGE MEDIUM STORING CONTROL PROGRAM THEREFOR
2y 5m to grant · Granted Dec 16, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
55%
Grant Probability
89%
With Interview (+34.6%)
4y 5m
Median Time to Grant
High
PTA Risk
Based on 346 resolved cases by this examiner. Grant probability derived from career allow rate.
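The projection figures above appear to combine additively: the 89% grant probability with interview matches the 55% base probability plus the 34.6-point interview lift, truncated to a whole percentage. A minimal sketch of that assumed arithmetic (the `with_interview` function is hypothetical, and the additive model is an inference from the displayed numbers, not a documented formula):

```python
def with_interview(base_rate_pct: float, lift_points: float) -> float:
    """Combine a base grant probability with an interview lift.

    Assumes the lift is additive in percentage points and caps the
    result at 100%: 55.0 + 34.6 = 89.6, displayed above as 89%.
    """
    return min(base_rate_pct + lift_points, 100.0)
```

Note this reproduces how the page appears to present the numbers; it says nothing about whether the interview lift is causal rather than a correlation in the examiner's resolved cases.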
