Prosecution Insights
Last updated: April 19, 2026
Application No. 18/689,589

Multidirectional Gesturing For On-Display Item Identification and/or Further Action Control

Non-Final OA • §102 / §103
Filed: Mar 06, 2024
Examiner: NGUYEN, TUAN S
Art Unit: 2179
Tech Center: 2100 — Computer Architecture & Software
Assignee: Carnegie Mellon University
OA Round: 1 (Non-Final)
Grant Probability: 65% (Moderate)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 65% — grants 65% of resolved cases (206 granted / 318 resolved; +9.8% vs TC avg)
Interview Lift: +38.4% — strong lift in allow rate for resolved cases with an interview vs. without
Avg Prosecution: 2y 9m typical timeline; 17 applications currently pending
Total Applications: 335 career history, across all art units
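The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of how such dashboard metrics are typically derived (the with/without-interview rates below are hypothetical placeholders, since the per-case data behind the +38.4% lift is not shown here):

```python
# Career allow rate: granted / resolved, using the counts shown above.
granted, resolved = 206, 318
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 64.8%, displayed as 65%

# Interview lift: relative improvement in allow rate for cases that had an
# examiner interview vs. those that did not. Rates here are hypothetical.
rate_with, rate_without = 0.83, 0.60
lift = (rate_with - rate_without) / rate_without
print(f"Interview lift: {lift:+.1%}")  # +38.3% for these placeholder rates
```

Note that the lift is a relative (not percentage-point) difference, which is why it can exceed the gap between the two raw rates.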

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§103: 54.6% (+14.6% vs TC avg)
§102: 17.2% (-22.8% vs TC avg)
§112: 10.8% (-29.2% vs TC avg)
Tech Center average estimates shown for comparison • Based on career data from 318 resolved cases

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

The present application contains 30 claims. Claims 1 and 40 are independent. Claims 1-4, 7-11, 13, 15, 17-18, 20-28, 32 and 34-40 are examined and rejected in the following detailed action.

Examiner Notes

The prior art rejections below cite particular paragraphs, columns, and/or line numbers in the references for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 26-27, 34, 37 and 40 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Moscovich et al. ("Moscovich", US Patent 9256588 B1).
Regarding claim 1, Moscovich teaches a method of controlling a computing system via a visual display driven by the computing system, the method being performed by the computing system and comprising: monitoring input of a user so as to recognize when the user has formed an item-action gesture (Fig. 4, col. 6 lines [3-11]. Moscovich describes a user performing a multi-line select gesture 404 to select a block of content, i.e., several lines or entire paragraphs of text); and in response to recognizing the item-action gesture without presence of another user-input action upon the computing system: identifying an on-display item, displayed on the visual display, that corresponds to the item-action gesture (Figs. 4, 7, col. 6 lines [3-11], col. 7 lines [20-36]. Moscovich describes a block of content, text or other format of content, or any contextual information (i.e., clip content 704) being selected by the multi-line select gesture 404); and manipulating the identified on-display item (Figs. 4, 7, col. 6 lines [3-11], col. 7 lines [20-36]. Moscovich describes that the identified clip content 704 can be underlined, bracketed, have a box or circle drawn around it, etc.).

Regarding claim 26, Moscovich teaches the method of claim 1, and further teaches the method wherein manipulating the identified on-display item includes capturing the identified on-display item (Fig. 7, col. 7 lines [20-36]. Moscovich describes "FIG. 7 depicts a user selecting clip content and creating a virtual notebook containing the clip content. Users may create virtual notebooks based at least in part on content presented on the device…").

Regarding claim 27, Moscovich teaches the method of claim 1, and further teaches the method wherein capturing the on-display item includes copying one or more HTML objects from an HTML description (Fig. 7, col. 2 lines [33-40]. Moscovich describes "… The notebook may be associated with several pieces of content, or may be associated with a particular piece of content. For example, one virtual notebook may contain clippings from web pages, eBooks, and so forth…").

Regarding claim 34, Moscovich teaches the method of claim 1, and further teaches the method wherein "without presence of another user-input action upon the computing system" includes without the presence of a user actuating a control of a human-machine-interface device (Fig. 4, col. 6 lines [6-11]. Moscovich describes "…the text of an eBook is presented with a multi-line select gesture 404 comprising a line drawn vertical relative to the orientation of text on the page. The user interface module 304 may be configured to recognize this multi-line select gesture 404 as an input to select the proximate text.").

Regarding claim 37, Moscovich teaches the method of claim 1, and further teaches the method wherein, when the user forms the item-action gesture, the on-display item was not already selected (Fig. 4, col. 6 lines [3-11]. Moscovich describes the block of text as not selected when the user performs the multi-line select gesture 404).

Regarding claim 40, it is a computer-readable storage claim having limitations similar in scope to claim 1; therefore, it is rejected under a similar rationale.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 2-4, 20-21, 24-25, 28, 32, 35-36 and 38-39 are rejected under 35 U.S.C. 103 as being unpatentable over Moscovich in view of Christie (US PG-Pub. 2011/0239155 A1).

Regarding claim 2, Moscovich teaches the method of claim 1, but fails to teach the method wherein monitoring input of the user includes monitoring movement by the user of an onscreen cursor. However, Christie teaches: wherein monitoring input of the user includes monitoring movement by the user of an onscreen cursor ([0062], Christie describes "…performing actions based on the outputs that can include, but are not limited to, moving an object such as a cursor …"). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the quick and dynamic block-content selection teachings of Moscovich with the input touch gesture teachings of Christie to provide visual gesture-tracking feedback to ensure the intended user operation on display content.

Regarding claim 3, Moscovich teaches the method of claim 1, but fails to teach the method wherein monitoring input of the user includes monitoring scrolling performed by the user. However, Christie teaches: wherein monitoring input of the user includes monitoring scrolling performed by the user (Figs. 19A, 19B, [0125], Christie describes "…as shown in FIGS. 19A and 19B, a touch detection zone 754 may be dedicated to scrolling action whereby a gesture of an up and down movement of a finger on the displayed photo 752 of the touch screen 750 may be interpreted as a gestural input for scrolling to the next photo 753 …"). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the quick and dynamic block-content selection teachings of Moscovich with the input touch gesture teachings of Christie to provide visual gesture-tracking feedback to ensure the intended user operation on display content.

Regarding claim 4, Moscovich teaches the method of claim 1, and further teaches the method wherein the visual display comprises a touchscreen (Fig. 1, col. 2 lines [57-59]. Moscovich describes the touchscreen display 104). Moscovich fails to teach: wherein monitoring input of the user includes monitoring movement by the user of a pointer engaged with the touchscreen. However, Christie teaches: wherein monitoring input of the user includes monitoring movement by the user of a pointer engaged with the touchscreen (Fig. 7F, [0062], Christie describes the touchscreen input at block 710 and "…performing actions based on the outputs that can include, but are not limited to, moving an object such as a cursor or pointer …"). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the quick and dynamic block-content selection teachings of Moscovich with the input touch gesture teachings of Christie to provide visual gesture-tracking feedback to ensure the intended user operation on display content.

Regarding claim 20, Moscovich teaches the method of claim 1, but fails to teach the method wherein the item-action gesture includes an action extension, and the method further comprises: monitoring input of the user so as to recognize when the user has formed an action extension of the item-action gesture; and in response to recognizing the action extension, executing an action relative to the identified on-display item.
However, Christie teaches: monitoring input of the user so as to recognize when the user has formed an action extension of the item-action gesture ([0068], Christie describes "…The initial parameter values may be based on set down, i.e., when the user sets their fingers on the touch screen, and the current values may be based on any point within a stroke occurring after set down"); and in response to recognizing the action extension, executing an action relative to the identified on-display item ([0117], Christie describes "…the set down of the fingers will associate or lock the fingers to a particular image object displayed on the touch screen. Typically, when at least one of the fingers is positioned over the image on the image object, the image object will be associated with or locked to the fingers. As a result, when the fingers are rotated, the rotate signal can be used to rotate the object in the direction of finger rotation (e.g., clockwise, counterclockwise) …"). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the quick and dynamic block-content selection teachings of Moscovich with the input touch gesture teachings of Christie to provide the corresponding gesture action associated with the selected displayed item.

Regarding claim 21, Moscovich-Christie teaches the method of claim 20, but Moscovich fails to teach the method further comprising monitoring movement by the user of the screen pointer relative to the display screen so as to recognize directionality of the action extension and determining the action based on the directionality.
However, Christie teaches: monitoring movement by the user of the screen pointer relative to the display screen so as to recognize directionality of the action extension and determining the action based on the directionality ([0117], Christie describes "…the set down of the fingers will associate or lock the fingers to a particular image object displayed on the touch screen. Typically, when at least one of the fingers is positioned over the image on the image object, the image object will be associated with or locked to the fingers. As a result, when the fingers are rotated, the rotate signal can be used to rotate the object in the direction of finger rotation (e.g., clockwise, counterclockwise) …"). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the quick and dynamic block-content selection teachings of Moscovich with the input touch gesture teachings of Christie to provide the corresponding gesture action associated with the selected displayed item.

Regarding claim 24, Moscovich teaches the method of claim 1, but fails to teach the method wherein determining an on-display item corresponding to the item-action gesture includes mapping an on-display location of at least a portion of the item-action gesture to an on-display location of the on-display item. However, Christie teaches: wherein determining an on-display item corresponding to the item-action gesture includes mapping an on-display location of at least a portion of the item-action gesture to an on-display location of the on-display item (Figs. 13A-13C, [0055], [0112]. Christie describes in paragraph [0055] "…a gesture can be defined as a stylized interaction with an input device that can be mapped to one or more specific computing operations …", and in paragraph [0112] "…FIG. 13A illustrates a display presenting an image object 364 in the form of a map of North America with embedded levels which can be zoomed. In some cases, as shown, the image object can be positioned inside a window that forms a boundary of the image object 364. FIG. 13B illustrates a user positioning their fingers 366 over a region of North America 368, particularly the United States 370 and more particularly California 372. In order to zoom in on California 372, the user starts to spread their fingers 366 apart as shown in FIG. 13C…"). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the quick and dynamic block-content selection teachings of Moscovich with the input touch gesture teachings of Christie to provide the corresponding gesture action associated with the selected displayed item.

Regarding claim 25, Moscovich-Christie teaches the method of claim 24, but Moscovich fails to teach the method wherein mapping the on-display location of at least a portion of the item-action gesture to an on-display location of the on-display item uses a cascading style sheet. However, Christie teaches: wherein mapping the on-display location of at least a portion of the item-action gesture to an on-display location of the on-display item uses a cascading style sheet (Fig. 7B; Christie shows Cascading Style Sheets (CSS) in use in figure 7B). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the quick and dynamic block-content selection teachings of Moscovich with the input touch gesture teachings of Christie to provide the corresponding gesture action associated with the selected displayed item.

Regarding claim 28, Moscovich teaches the method of claim 1, but fails to teach the method wherein the item-action gesture comprises a reciprocating scrolling action.
However, Christie teaches: wherein the item-action gesture comprises a reciprocating scrolling action (Figs. 19A-19B, 21D, [0125]. Christie describes the scrolling actions 501 and 795 in Fig. 21D and, in paragraph [0125], "…Specifically, as shown in FIGS. 19A and 19B, a touch detection zone 754 may be dedicated to scrolling action whereby a gesture of an up and down movement of a finger on the displayed photo 752 of the touch screen 750 may be interpreted as a gestural input for scrolling to the next photo 753 …"). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the quick and dynamic block-content selection teachings of Moscovich with the input touch gesture teachings of Christie to provide the corresponding gesture action associated with the selected displayed item.

Regarding claim 32, Moscovich-Christie teaches the method of claim 28, but Moscovich fails to teach the method wherein the user actuates a scroll controller of a human-machine-interface device to effect the reciprocating scrolling action. However, Christie teaches: wherein the user actuates a scroll controller of a human-machine-interface device to effect the reciprocating scrolling action ([0125]. Christie describes "…a UI element can be displayed on the screen as a virtual vertical slide bar to indicate to the user that a scrolling action has been activated …"). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the quick and dynamic block-content selection teachings of Moscovich with the input touch gesture teachings of Christie to provide the corresponding gesture action associated with the selected displayed item.
Regarding claim 35, Moscovich teaches the method of claim 1, but fails to teach the method wherein monitoring input of a user includes monitoring user input of a gesture, the method further comprising: treating the item-action gesture as having differing control segments; and performing differing actions for the differing control segments. However, Christie teaches: wherein monitoring input of a user includes monitoring user input of a gesture, the method further comprising: treating the item-action gesture as having differing control segments; and performing differing actions for the differing control segments (Fig. 7F, [0074]. Christie describes "…If the touch detected can be determined 711 to be one finger, then a determination 712 can be made of whether the touch is in a predetermined proximity of a displayed image object that is associated with a selectable file object, and if so, then a selection action is made 714. If a double tap action is detected 716 in association with a selectable object, then a double-click action can be invoked 718. A double tap action can be determined by the detection of a finger leaving the touch screen and immediately retouching the touch screen twice …"). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the quick and dynamic block-content selection teachings of Moscovich with the input touch gesture teachings of Christie to provide the corresponding gesture action associated with the selected displayed item.

Regarding claim 36, Moscovich-Christie teaches the method of claim 35, but Moscovich fails to teach the method wherein the differing control segments comprise a suspected-gesture segment and a confirmed-gesture segment. However, Christie teaches: wherein the differing control segments comprise a suspected-gesture segment and a confirmed-gesture segment (Fig. 7G, [0075]. Christie describes "As shown in FIG. 7G, if the one finger touch detected is not associated with a selectable file object, but rather is determined 720 to be associated with a network address hyperlink, then a single-click action can be invoked whereby the hyperlink can be activated. If the hyperlink was touched within a non-browser environment, then a browser application would also be launched"). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the quick and dynamic block-content selection teachings of Moscovich with the input touch gesture teachings of Christie to provide the corresponding gesture action associated with the selected displayed item.

Regarding claim 38, Moscovich teaches the method of claim 1, but fails to teach the method wherein, when the user forms the item-action gesture, the on-display item is in a selected state. However, Christie teaches: wherein when the user forms the item-action gesture, the on-display item is in a selected state (Fig. 7F, [0050]. Christie describes the select object state at step 715 before the action at step 719 and, in paragraph [0050], "…the user can select and/or activate various graphical images in order to initiate functions and tasks associated therewith …"). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the quick and dynamic block-content selection teachings of Moscovich with the input touch gesture teachings of Christie to provide the corresponding gesture action associated with the selected displayed item.

Regarding claim 39, Moscovich teaches the method of claim 38, but fails to teach the method wherein manipulation of the identified on-display item does not change the selected state. However, Christie teaches: wherein manipulation of the identified on-display item does not change the selected state (Fig. 7F, [0050]. Christie describes the select object state at step 715 after the action at step 719 and, in paragraph [0050], "…The GUI 69 can additionally or alternatively display information, such as non-interactive text and graphics …"). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the quick and dynamic block-content selection teachings of Moscovich with the input touch gesture teachings of Christie to provide the corresponding gesture action associated with the selected displayed item.

Claims 7-11, 13, 15 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Moscovich in view of Rucine et al. ("Rucine", US PG-Pub. 2017/0153806 A1).

Regarding claim 7, Moscovich teaches the method of claim 1, but fails to teach the method wherein the visual display has a display area, and the item-action gesture includes a multi-directional trajectory comprising multiple contiguous segments extending in differing directions relative to the display area. However, Rucine teaches: wherein the visual display has a display area, and the item-action gesture includes a multi-directional trajectory comprising multiple contiguous segments extending in differing directions relative to the display area (Figs. 12C, 15A, [0128]. Rucine describes "…gesture 1210 detected as input as a vertical zig-zag line in a single, multi-directional stroke by a user swiping their finger or stylus tip up and down several times in the displayed position shown …"). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the quick and dynamic block-content selection teachings of Moscovich with the input touch gesture teachings of Rucine to mark the selection zone for editing or manipulating the display content.
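[Editor's illustration] The "multi-directional trajectory comprising multiple contiguous segments extending in differing directions" at issue in claim 7 amounts, in implementation terms, to a stroke whose segments repeatedly reverse direction. A hypothetical sketch of detecting such a zig-zag from sampled pointer coordinates (illustrative only; not code from Moscovich or Rucine, and the threshold values are arbitrary):

```python
# Hypothetical sketch: classify a sampled stroke as a multi-directional
# "zig-zag" by counting reversals of horizontal movement direction.
def is_zigzag(points, min_reversals=2, min_segment=5.0):
    """points: list of (x, y) samples along a single stroke."""
    directions = []
    last_x = points[0][0]
    for x, _ in points[1:]:
        dx = x - last_x
        if abs(dx) >= min_segment:       # ignore jitter below the threshold
            directions.append(1 if dx > 0 else -1)
            last_x = x
    # A reversal is any sign change between consecutive segments.
    reversals = sum(1 for a, b in zip(directions, directions[1:]) if a != b)
    return reversals >= min_reversals

# A left-right-left-right stroke swept over a line of on-screen text:
stroke = [(0, 10), (30, 12), (2, 14), (28, 16), (0, 18)]
print(is_zigzag(stroke))  # True
```

The same reversal count distinguishes the "wiggling" gestures of claims 8-9 from an ordinary single-direction swipe, which produces zero reversals.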
Regarding claim 8, Moscovich-Rucine teaches the method of claim 7, but Moscovich fails to teach the method wherein the item-action gesture comprises a wiggling gesture. However, Rucine teaches: wherein the item-action gesture comprises a wiggling gesture (Fig. 12C, [0128]. Rucine describes "…gesture 1210 detected as input as a vertical zig-zag line in a single, multi-directional stroke by a user swiping their finger or stylus tip up and down several times in the displayed position shown …"). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the quick and dynamic block-content selection teachings of Moscovich with the input touch gesture teachings of Rucine to mark the selection zone for editing or manipulating the display content.

Regarding claim 9, Moscovich-Rucine teaches the method of claim 8, but Moscovich fails to teach the method wherein the wiggling gesture includes a zig-zag trajectory. However, Rucine teaches: wherein the wiggling gesture includes a zig-zag trajectory (Fig. 12C, [0128]. Rucine describes "…gesture 1210 detected as input as a vertical zig-zag line in a single, multi-directional stroke by a user swiping their finger or stylus tip up and down several times in the displayed position shown …"). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the quick and dynamic block-content selection teachings of Moscovich with the input touch gesture teachings of Rucine to mark the selection zone for editing or manipulating the display content.

Regarding claim 10, Moscovich-Rucine teaches the method of claim 7, but Moscovich fails to teach the method wherein the item-action gesture comprises a curvilinear shape. However, Rucine teaches: wherein the item-action gesture comprises a curvilinear shape (Fig. 18D, [0146]. Rucine describes "…gesture 1812 detected as input as a bottom-to-top curly line in a single stroke by a user swiping their finger or stylus tip from the bottom to the top in a spiral to the right …"). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the quick and dynamic block-content selection teachings of Moscovich with the input touch gesture teachings of Rucine to mark the selection zone for editing or manipulating the display content.

Regarding claim 11, Moscovich-Rucine teaches the method of claim 10, but Moscovich fails to teach the method wherein the item-action gesture comprises a repetition of the curvilinear shape, and the repetition proceeds along a procession direction. However, Rucine teaches: wherein the item-action gesture comprises a repetition of the curvilinear shape, and the repetition proceeds along a procession direction (Fig. 18D, [0146]. Rucine describes "…gesture 1812 detected as input as a bottom-to-top curly line in a single stroke by a user swiping their finger or stylus tip from the bottom to the top in a spiral to the right …"). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the quick and dynamic block-content selection teachings of Moscovich with the input touch gesture teachings of Rucine to mark the selection zone for editing or manipulating the display content.

Regarding claim 13, Moscovich-Rucine teaches the method of claim 8, but Moscovich fails to teach the method wherein the multi-directional gesture is primarily horizontal relative to the visual screen. However, Rucine teaches: wherein the multi-directional gesture is primarily horizontal relative to the visual screen (Figs. 12C, 15A, [0128]. Rucine describes "…gesture 1210 detected as input as a vertical zig-zag line in a single, multi-directional stroke by a user swiping their finger or stylus tip up and down several times in the displayed position shown …". Whether the zig-zag line is horizontal or vertical relative to the visual screen is an input design choice and does not have any specific function associated with it.). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the quick and dynamic block-content selection teachings of Moscovich with the input touch gesture teachings of Rucine to mark the selection zone for editing or manipulating the display content.

Regarding claim 15, Moscovich-Rucine teaches the method of claim 7, but Moscovich fails to teach the method wherein the multi-directional gesture is primarily vertical relative to the screen display. However, Rucine teaches: wherein the multi-directional gesture is primarily vertical relative to the screen display (Figs. 12C, 15A, [0128]. Rucine describes "…gesture 1210 detected as input as a vertical zig-zag line in a single, multi-directional stroke by a user swiping their finger or stylus tip up and down several times in the displayed position shown …". Whether the zig-zag line is horizontal or vertical relative to the visual screen is an input design choice and does not have any specific function associated with it.). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the quick and dynamic block-content selection teachings of Moscovich with the input touch gesture teachings of Rucine to mark the selection zone for editing or manipulating the display content.
Regarding claim 17, Moscovich-Rucine teaches the method of claim 7, but Moscovich fails to teach the method wherein the item-action gesture comprises a plurality of segment-swipes, wherein each transition between adjacent segment-swipes forms an abrupt angle. However, Rucine teaches: wherein the item-action gesture comprises a plurality of segment-swipes, wherein each transition between adjacent segment-swipes forms an abrupt angle (Figs. 12C, 15A, [0128]. Rucine describes "…gesture 1210 detected as input as a vertical zig-zag line in a single, multi-directional stroke by a user swiping their finger or stylus tip up and down several times in the displayed position shown …". The zig-zag lines form the abrupt angles between the segment-swipes shown in the cited figures.). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the quick and dynamic block-content selection teachings of Moscovich with the input touch gesture teachings of Rucine to mark the selection zone for editing or manipulating the display content.

Regarding claim 18, Moscovich-Rucine teaches the method of claim 17, but Moscovich fails to teach the method wherein the abrupt angle is in a range of 0 degrees to 90 degrees, inclusive. However, Rucine teaches: wherein the abrupt angle is in a range of 0 degrees to 90 degrees, inclusive (Figs. 12C, 15A, [0128]. Rucine describes "…gesture 1210 detected as input as a vertical zig-zag line in a single, multi-directional stroke by a user swiping their finger or stylus tip up and down several times in the displayed position shown …". The zig-zag lines form abrupt angles of differing degrees; the specific angle is an input design choice and does not have any specific function associated with it.).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the quick and dynamic block-content selection teachings of Moscovich with the input touch gesture teachings of Rucine to mark the selection zone for editing or manipulating the display content.

Claims 22-23 are rejected under 35 U.S.C. 103 as being unpatentable over Moscovich in view of Christie, and further in view of Wardell et al. ("Wardell", US PG-Pub. 2017/0308399 A1).

Regarding claim 22, Moscovich-Christie teaches the method of claim 20, but Moscovich fails to teach the method wherein the action includes assigning a value to the identified on-display item. However, Christie teaches: wherein the action includes assigning a value to the identified on-display item ([0068]. Christie describes "…The initial parameter values may be based on set down, i.e., when the user sets their fingers on the touch screen, and the current values may be based on any point within a stroke occurring after set down …"). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the quick and dynamic block-content selection teachings of Moscovich with the input touch gesture teachings of Christie to provide the corresponding gesture action associated with the selected displayed item. Modified Moscovich fails to teach a valence. However, Wardell teaches a valence (Fig. 8M, [0135]. Wardell describes "…The pop up also includes a drop down menu which allows the user to select a valence which is attached to the attribute item to be created…").
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the quick and dynamic block-content selection teachings of modified Moscovich with the valence-assignment teaching of Wardell to determine whether the item is positive, negative, or neutral.

Re-claim 23, Moscovich-Christie-Wardell teaches the method of claim 22, but Moscovich fails to teach a method wherein the value has a state that is a function of directionality of the extension. However, Christie teaches this limitation ([0117]. Christie describes "… the set down of the fingers will associate or lock the fingers to a particular image object displayed on the touch screen. Typically, when at least one of the fingers is positioned over the image on the image object, the image object will be associated with or locked to the fingers. As a result, when the fingers are rotated, the rotate signal can be used to rotate the object in the direction of finger rotation (e.g., clockwise, counterclockwise) …"). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the quick and dynamic block-content selection teachings of Moscovich with the input touch-gesture teaching of Christie to provide the corresponding gesture action associated with the selected displayed item. Modified Moscovich fails to teach a valence. However, Wardell teaches a valence (Fig. 8M, [0135]. Wardell describes "…The pop up also includes a drop down menu which allows the user to select a valence which is attached to the attribute item to be created…").
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the quick and dynamic block-content selection teachings of modified Moscovich with the valence-assignment teaching of Wardell to determine whether the item is positive, negative, or neutral.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure, including Hicks (US 2014/0173484 A1), which describes a block-based content selection technique for touchscreen UIs.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TUAN S NGUYEN, whose telephone number is (571) 270-7612. The examiner can normally be reached Monday-Friday, 9-5. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Fred Ehichioya, can be reached at 571-272-4034. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.
/TUAN S NGUYEN/Primary Examiner, Art Unit 2179
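The "abrupt angle" limitation at issue in claims 17-18 (each transition between adjacent segment-swipes forming an angle of 0 to 90 degrees, inclusive) can be made concrete with a short sketch. The code below is purely illustrative and does not come from the application or any cited reference: it measures the interior angle at the joint between two consecutive swipe segments, under the convention that a full reversal (Rucine's vertical zig-zag) measures 0 degrees and a right-angle turn measures 90 degrees.

```python
import math

def turn_angle(p0, p1, p2):
    """Interior angle (degrees) at the joint between swipe segments
    p0->p1 and p1->p2. A full reversal gives 0, a right-angle turn
    gives 90, and a straight continuation gives 180."""
    v1 = (p1[0] - p0[0], p1[1] - p0[1])
    v2 = (p2[0] - p1[0], p2[1] - p1[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    # Clamp to guard against floating-point drift outside [-1, 1]
    cos_a = max(-1.0, min(1.0, dot / (n1 * n2)))
    return 180.0 - math.degrees(math.acos(cos_a))

# Vertical zig-zag: up, then straight back down -> 0-degree abrupt angle
print(turn_angle((0, 0), (0, 10), (0, 0)))  # 0.0
```

Under this convention, a detector that flags joints measuring at or below 90 degrees would capture exactly the claimed 0-90 degree "abrupt" range, while smooth continuations (near 180 degrees) would pass through unflagged.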

Prosecution Timeline

Mar 06, 2024
Application Filed
Nov 16, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602157
SIMULATION DEVICE SUITABLE FOR USE IN AUGMENTED-REALITY OR VIRTUAL-REALITY ENVIRONMENTS
2y 5m to grant · Granted Apr 14, 2026
Patent 12591354
MEASURING DEVICE
2y 5m to grant · Granted Mar 31, 2026
Patent 12574957
DISPLAY METHOD OF WIRELESS DEVICE FOR CONNECTION
2y 5m to grant · Granted Mar 10, 2026
Patent 12566914
SYSTEM AND METHODS TO FACILITATE CONTENT GENERATION USING GENERATIVE ARTIFICIAL INTELLIGENCE MODELS
2y 5m to grant · Granted Mar 03, 2026
Patent 12568165
NON-TERRESTRIAL NETWORK CONNECTION ICON
2y 5m to grant · Granted Mar 03, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
65%
Grant Probability
99%
With Interview (+38.4%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 318 resolved cases by this examiner. Grant probability derived from career allow rate.
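The headline figures above are consistent with simple arithmetic on the examiner's career data. A minimal check, assuming the displayed grant probability is just granted/resolved rounded to the nearest percent, and that "+9.8% vs TC avg" is an additive difference in percentage points (the 99% with-interview figure presumably comes from a separate with-interview subset and is not reproduced here):

```python
granted, resolved = 206, 318

allow_rate = granted / resolved              # career allow rate
tc_avg_estimate = allow_rate - 0.098         # implied by "+9.8% vs TC avg"

print(f"allow rate: {allow_rate:.1%}")       # ~64.8%, displayed as 65%
print(f"implied TC average: {tc_avg_estimate:.1%}")
```

The implied Tech Center average of roughly 55% lines up with the §103-specific baseline shown earlier, which suggests the dashboard's deltas are computed against per-statute averages of similar magnitude.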
