DETAILED ACTION
This office action is responsive to communication(s) filed on 1/5/2026.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Foreign Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Election/Restrictions
Applicant’s election without traverse of Group I (claims 1-9 and 13-15) in the reply filed on 1/5/2026 is acknowledged.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
The following title is suggested: Cross-Window Drag-and-Drop Control by an Operating System of an Electronic Device
Claims Status
Claims 1-7, 13-15 and 18-27 are pending and are currently being examined.
Claims 1, 13 and 27 are independent.
Claims 18-27 are newly added.
Claims 8-12 and 16-17 are newly canceled.
Claim Rejections - 35 USC § 112(b) or 112(2nd)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim(s) 25 is/are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.
Claim 25 recites the limitation “the first input control” in the phrase “the first input control is an edit control of the chat window”. There is insufficient antecedent basis for this limitation in the claim.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1-3, 5-6, 13-15, 18-19, 22-23, 25 and 27 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Arumugam; Suresh et al. (hereinafter Arumugam – US 20140006967 A1).
Independent Claim 1:
Arumugam teaches:
A control content drag method performed by a first electronic device, comprising: (fig. 4)
displaying a first window and a second window of at least one application running at an application layer of the first electronic device; (see fig. 5B and ¶ 69, displaying windows for applications 505 and 530, respectively. The applications run on a first electronic device [e.g., computing device 200], as shown in fig. 2 and ¶ 34. It was well within the capabilities of a person having ordinary skill in the art to have realized that these applications run at an application layer of the first electronic device, because every application that displays windows must operate within the application layer, which is the software boundary responsible for implementing the graphical user interface (GUI) logic, user interactions, and application-specific content that requests windowing services from the underlying operating system)
detecting, by a framework layer of an operating system of the first electronic device, a touch and hold operation on a first control in the first window; (A module within a first application receives a user interface event notification from the operating system's UI manager, which includes details about user input, such as a touch and hold, ¶ 37. Based on this notification, the application can determine if a specific item has been dragged outside its window, ¶¶ 23 and 37. The UI manager, functioning as a framework layer of the operating system, provides these notifications via API calls and manages, detects, and routes system-level events to applications using APIs and UI toolkits, ¶ 42. Here, it was well within the capabilities of a person having ordinary skill in the art to have realized that the UI manager 232 of the operating system is a framework layer of the operating system that manages UI inputs, including detecting…a touch and hold operation on a first control in the first window, because it provides a structural base, APIs, and libraries (such as UI toolkits) to manage, detect, and route system-level events to applications)
and in response to a drag operation that uses the touch and hold operation as a drag start operation, transferring, by the framework layer, content of the first control from the first window to the second window, wherein the drag operation is used to drag the content of the first control from the first window to the second window. (an operating system 230 detects a drag-and-drop action of a UI object, where a "UI manager 232" determines if the released object falls within a second application's window, subsequently transferring the "data object" [content] from the first to the second window, ¶ 62 and fig. 4:435,440)
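For illustration of the above mapping only, the following is a minimal sketch, in Java, of how a framework-layer drag unit of the kind described might detect a touch and hold, track the drag, and transfer the dragged content to whichever window contains the release point. All class, method, and value names are hypothetical and are not drawn from Arumugam.

// Illustrative sketch only; all names are hypothetical and not drawn from Arumugam.
import java.util.List;

public class DragUnitSketch {

    // A window occupies a rectangular region of the screen and can receive dropped content.
    static class AppWindow {
        final String name;
        final int x, y, w, h;
        String droppedContent;

        AppWindow(String name, int x, int y, int w, int h) {
            this.name = name; this.x = x; this.y = y; this.w = w; this.h = h;
        }

        boolean contains(int px, int py) {
            return px >= x && px < x + w && py >= y && py < y + h;
        }
    }

    // Framework-layer "drag unit": captures the dragged content on touch and hold,
    // then, on release, hit-tests the release point against the open windows to pick the target.
    static class DragUnit {
        private String draggedContent;

        void onTouchAndHold(String controlContent) {
            draggedContent = controlContent;           // drag start: capture the source control's content
        }

        void onDragRelease(int px, int py, List<AppWindow> windows) {
            for (AppWindow w : windows) {
                if (w.contains(px, py)) {              // release falls within this window
                    w.droppedContent = draggedContent; // transfer the content to the target window
                    System.out.println("Transferred \"" + draggedContent + "\" to the " + w.name);
                    return;
                }
            }
            System.out.println("Release outside any window; drag cancelled.");
        }
    }

    public static void main(String[] args) {
        AppWindow first = new AppWindow("first window", 0, 0, 400, 600);
        AppWindow second = new AppWindow("second window", 400, 0, 400, 600);
        DragUnit dragUnit = new DragUnit();
        dragUnit.onTouchAndHold("image-123");                      // touch and hold on a control in the first window
        dragUnit.onDragRelease(550, 300, List.of(first, second));  // the drop lands inside the second window
    }
}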
Claim 2:
The rejection of claim 1 is incorporated. Arumugam further teaches:
wherein the detecting the touch and hold operation and the transferring the content of the first control are performed by a drag unit belonging to the framework layer of the operating system. (the UI manager 232 of the operating system functions as a “drag unit” that detects the touch and hold operation and transfer[s] the content, as explained above for claim 1, ¶¶ 23, 37 and 62. Also see ¶¶ 61 and 63)
Claim 3:
The rejection of claim 1 is incorporated. Arumugam further teaches:
wherein the detecting the touch and hold operation on the first control in the first window comprises:
determining, based on a location of the touch and hold operation, that the touch and hold operation is the touch and hold operation on the first control in the first window. (the operating system outputs a thumbnail at the position of the user’s input and tracks the position of the user’s input with the thumbnail as the user drags the object [necessarily determining a drag, which includes a touch and hold], ¶ 61, and can determine if the release input falls within the window of a particular application and transfer the object to the corresponding application [touch and hold operation on the first control in the first window], ¶¶ 62 and 68 and fig. 5B)
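For illustration only, a minimal sketch of the location-based determination discussed above: the framework hit-tests the touch and hold coordinates against the bounds of the controls in the first window to decide that the operation is on the first control. All names and coordinates below are hypothetical.

// Illustrative sketch only; all names and coordinates are hypothetical.
import java.util.List;
import java.util.Optional;

public class ControlHitTestSketch {

    // A control occupies a rectangular region inside the first window.
    record Control(String id, int x, int y, int w, int h) {
        boolean contains(int px, int py) {
            return px >= x && px < x + w && py >= y && py < y + h;
        }
    }

    // Resolve which control, if any, sits under the touch and hold location.
    static Optional<Control> controlAt(List<Control> controlsInWindow, int px, int py) {
        return controlsInWindow.stream().filter(c -> c.contains(px, py)).findFirst();
    }

    public static void main(String[] args) {
        List<Control> firstWindowControls = List.of(
                new Control("title-text", 10, 10, 200, 40),
                new Control("image-control", 10, 60, 200, 200));
        // A touch and hold at (50, 120) falls inside "image-control", so the framework
        // treats it as a touch and hold operation on that control in the first window.
        controlAt(firstWindowControls, 50, 120)
                .ifPresent(c -> System.out.println("Touch and hold on: " + c.id()));
    }
}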
Claim 5:
The rejection of claim 1 is incorporated. Arumugam further teaches:
wherein the transferring the content of the first control from the first window to the second window comprises:
detecting a drag end operation; (a release of a dragging/held touch gesture is detected, ¶¶ 23 and 69 and fig. 5C)
determining a first input control in the second window based on a location of the drag end operation; (a drop zone [first input control], albeit not visible, is determined to be at the right of the image 595, see thumbnail 535 and dropped memo 580, based on the location of the release of thumbnail 535, see figs. 5C-5D, ¶¶ 69 and 70)
and using the content of the first control as input content of first input control. (the dragged memo [first control] is dropped and input into the second window at the drop zone [first input control] to the right of image 595, ¶¶ 69-70 and fig. 5C)
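For illustration only, a minimal sketch of the drop-side determination discussed above: on the drag end, the release location is hit-tested against the input controls of the second window, and the dragged content becomes the input content of the matching control. All identifiers and coordinates below are hypothetical.

// Illustrative sketch only; all identifiers and coordinates are hypothetical.
import java.util.LinkedHashMap;
import java.util.Map;

public class DropTargetSketch {

    // Input controls of the second window, keyed by id, with bounding boxes {x, y, width, height}.
    static final Map<String, int[]> SECOND_WINDOW_INPUT_CONTROLS = new LinkedHashMap<>();
    static {
        SECOND_WINDOW_INPUT_CONTROLS.put("caption-field", new int[]{420, 50, 360, 60});
        SECOND_WINDOW_INPUT_CONTROLS.put("drop-zone-right-of-image", new int[]{640, 150, 140, 200});
    }

    // On the drag end, pick the input control whose bounds contain the release point,
    // then use the dragged content as that control's input content.
    static String dropInto(int px, int py, String draggedContent, Map<String, String> controlContents) {
        for (Map.Entry<String, int[]> e : SECOND_WINDOW_INPUT_CONTROLS.entrySet()) {
            int[] b = e.getValue();
            if (px >= b[0] && px < b[0] + b[2] && py >= b[1] && py < b[1] + b[3]) {
                controlContents.put(e.getKey(), draggedContent);   // content becomes the control's input content
                return e.getKey();
            }
        }
        return null;                                               // release did not land on an input control
    }

    public static void main(String[] args) {
        Map<String, String> contents = new LinkedHashMap<>();
        String target = dropInto(700, 200, "memo-580", contents);
        System.out.println("Dropped into: " + target + "; contents now: " + contents);
    }
}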
Claim 6:
The rejection of claim 5 is incorporated. Arumugam further teaches:
wherein the determining the first input control in the second window based on a location of the drag end operation comprises:
determining, by the framework layer based on the location of the drag end operation, that a window on which the drag end operation is performed is the second window; (determining a release of the object within the window of the second application, ¶¶ 23, 62 and 69, and figs. 4:435 and 5C)
and using, by the framework layer, a control in an input state in the second window as the first input control. (Arumugam’s drop zone/locations, as shown in figs. 5C-5D, see ¶¶ 69-70, are considered controls in an “input state” because drop zones are interactive, designated UI areas that listen for drag-and-drop events to accept, process, and update data—such as file uploads or item reorganization—when a user drops an item onto them)
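For illustration only, a minimal sketch of the selection of a control in an input state as the first input control, as discussed above: the control flagged as being in an input (e.g., focused/editable) state is used as the drop target. All names below are hypothetical.

// Illustrative sketch only; all names are hypothetical.
import java.util.List;

public class FocusedControlSketch {

    // An input control that may or may not currently be in an input (focused/editable) state.
    record InputControl(String id, boolean inInputState) {}

    // When the drag ends over the second window, use whichever control is
    // currently in an input state as the first input control (the drop target).
    static InputControl targetControl(List<InputControl> controlsInSecondWindow) {
        return controlsInSecondWindow.stream()
                .filter(InputControl::inInputState)
                .findFirst()
                .orElse(null);
    }

    public static void main(String[] args) {
        List<InputControl> controls = List.of(
                new InputControl("search-box", false),
                new InputControl("message-edit-field", true));
        InputControl target = targetControl(controls);
        System.out.println("Dragged content routed to: " + (target != null ? target.id() : "none"));
    }
}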
Independent Claims 13 and 27:
Claim(s) 13 and 27 are directed to an electronic device and computer-readable storage medium for accomplishing the steps of the method in claim 1, and are rejected using similar rationale(s).
Claim 14:
The rejection of claim 13 is incorporated. Claim(s) 14 is directed to an electronic device for accomplishing the steps of the method in claim 2, and is rejected using similar rationale(s).
Claim 15:
The rejection of claim 13 is incorporated. Claim(s) 15 is directed to an electronic device for accomplishing the steps of the method in claim 3, and is rejected using similar rationale(s).
Claim 18:
The rejection of claim 5 is incorporated. Arumugam further teaches:
wherein the first input control is an edit control. (The information moved to the drop zone is editable at the drop zone, and is input into an application that allows editing, e.g., adding items to be shared, see ¶¶ 69-70 and figs. 5C-5D, so the drop zone is considered an edit control)
Claim 19:
The rejection of claim 18 is incorporated. Arumugam further teaches:
wherein the second window is a chat window, and the first input control is an edit control of the chat window. (the second application corresponding to the second window can be a messaging application [implying a chat window], ¶ 20. It was well within the capabilities of a person having ordinary skill in the art to have realized that a messaging app would function similarly to, or as, the sharing application 530, ¶ 68 and fig. 5D)
Claim 22:
The rejection of claim 13 is incorporated. Claim(s) 22 is directed to an electronic device for accomplishing the steps of the method in claim 5, and is rejected using similar rationale(s).
Claim 23:
The rejection of claim 22 is incorporated. Claim(s) 23 is directed to an electronic device for accomplishing the steps of the method in claim 6, and is rejected using similar rationale(s).
Claim 25:
The rejection of claim 13 is incorporated. Claim(s) 25 is directed to an electronic device for accomplishing the steps of the method in claim 19, and is rejected using similar rationale(s).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 4, 20-21 and 26 is/are rejected under 35 U.S.C. 103 as being unpatentable over Arumugam (US 20140006967 A1), as applied to claims 1 and 13 above, and further in view of JWA; Changhyup et al. (hereinafter JWA – US 20160048285 A1).
Claim 4:
The rejection of claim 1 is incorporated. Arumugam further teaches that data object information is obtained by the operating system after a touch input is detected by the operating system, ¶¶ 40, 59-60, and that such information includes “any set of data that describes the particular UI object”, ¶ 25, e.g., its object type [control type], ¶ 63.
Arumugam does not appear to expressly teach, but JWA teaches:
wherein after the detecting the touch and hold operation on the first control in the first window, and before the transferring content of the first control from the first window to the second window, the method further comprises: determining, by the framework layer based on a control type of the first control, that the first control supports a drag function (identifying objects as being draggable or non-draggable based on the type of content, and based on a classification that occurs before or after the object is displayed, ¶ 68).
Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method of Arumugam to include wherein after the detecting the touch and hold operation on the first control in the first window, and before the transferring content of the first control from the first window to the second window, the method further comprises: determining, by the framework layer based on a control type of the first control, that the first control supports a drag function, as taught by JWA.
One would have been motivated to make such a combination in order to improve the efficiency of the method, e.g., by identifying the interaction capabilities of the GUI objects, JWA ¶ 61. It was well within the capabilities of a person having ordinary skill in the art to have realized that such identification of interaction capabilities for the GUI objects can provide user-initiated interactive modifications of content while preventing errors and/or unnecessary user interactions, e.g., by providing clear, actionable feedback on which objects can be moved and which cannot.
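For illustration only, a minimal sketch of the type-based draggability check discussed in the combination above: after the touch and hold is detected and before any transfer, the framework consults the control type to decide whether the control supports the drag function. The type list and names below are hypothetical and are not drawn from JWA.

// Illustrative sketch only; the type list and names are hypothetical.
import java.util.Set;

public class DraggableTypeSketch {

    // Control types the framework layer treats as supporting the drag function.
    static final Set<String> DRAGGABLE_TYPES = Set.of("image", "text", "file-thumbnail");

    // Checked after the touch and hold is detected and before any content is transferred.
    static boolean supportsDrag(String controlType) {
        return DRAGGABLE_TYPES.contains(controlType);
    }

    public static void main(String[] args) {
        System.out.println("image control draggable?  " + supportsDrag("image"));   // true: drag may proceed
        System.out.println("button control draggable? " + supportsDrag("button"));  // false: drag is not started
    }
}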
Claim 20:
The rejection of claim 4 is incorporated. Arumugam further teaches:
wherein the first control is an image control. (the object can be an image, ¶ 21, e.g., in an image viewing application, ¶ 22)
Claim 21:
The rejection of claim 13 is incorporated. Claim(s) 21 is directed to an electronic device for accomplishing the steps of the method in claim 4, and is rejected using similar rationale(s).
Claim 26:
The rejection of claim 21 is incorporated. Claim(s) 26 is directed to an electronic device for accomplishing the steps of the method in claim 20, and is rejected using similar rationale(s).
Claim(s) 7 and 24 is/are rejected under 35 U.S.C. 103 as being unpatentable over Arumugam (US 20140006967 A1), as applied to claims 5 and 22 above, and further in view of Chaudhri; Imran A. et al. (hereinafter Chaudhri – US 20170357320 A1).
Claim 7:
The rejection of claim 5 is incorporated. Arumugam further teaches:
wherein the determining the first input control in the second window based on a location of the drag end operation comprises:
determining, based on the location of the drag end operation, that a window on which the drag end operation is performed is the second window; (determining a release of the object within the window of the second application, ¶¶ 23, 62 and 69, and figs. 4:435 and 5C)
Arumugam does not appear to expressly teach, but Chaudhri teaches:
and using an input control that is closest to the location of the drag end operation in the second window as the first input control (The device displays a user interface where an object automatically aligns with and settles into predefined, specific locations (snap positions) upon being released within a threshold range of those spots, such as in calendar, list, or icon grid layouts, ¶ 856. The object "settles into" or snaps to specific, pre-defined locations only when released within a "threshold range" of them, so the system inherently forces the dropped object to align with the nearest valid snap position relative to the end of the drag operation).
Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method of Arumugam to include and using an input control that is closest to the location of the drag end operation in the second window as the first input control, as taught by Chaudhri.
One would have been motivated to make such a combination in order to improve the usability of the method by automatically snapping objects into designated areas when dropped near those areas, as seen in examples like calendar entries settling between date lines, list items aligning in slots, or icons fitting into a grid on a home screen, without requiring release at those precise locations, Chaudhri ¶ 856.
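For illustration only, a minimal sketch of the nearest-control selection rationale above: the input control whose position is closest to the drag end location, within a snap threshold, is used as the first input control. The threshold, names, and coordinates below are hypothetical and are not drawn from Chaudhri.

// Illustrative sketch only; the threshold, names, and coordinates are hypothetical.
import java.util.Map;

public class NearestSnapSketch {

    // Candidate input controls in the second window, keyed by id, with center coordinates {x, y}.
    static final Map<String, int[]> CANDIDATES = Map.of(
            "slot-a", new int[]{450, 100},
            "slot-b", new int[]{450, 300},
            "slot-c", new int[]{450, 500});

    // The release must fall within this distance of a candidate for it to be used.
    static final double SNAP_THRESHOLD = 120.0;

    // Pick the input control whose center is closest to the drag end location,
    // provided the release landed within the snap threshold of that control.
    static String nearestControl(int px, int py) {
        String best = null;
        double bestDist = Double.MAX_VALUE;
        for (Map.Entry<String, int[]> e : CANDIDATES.entrySet()) {
            double dist = Math.hypot(px - e.getValue()[0], py - e.getValue()[1]);
            if (dist < bestDist) {
                bestDist = dist;
                best = e.getKey();
            }
        }
        return bestDist <= SNAP_THRESHOLD ? best : null;
    }

    public static void main(String[] args) {
        System.out.println("Release at (470, 280) snaps to: " + nearestControl(470, 280)); // slot-b
        System.out.println("Release at (900, 900) snaps to: " + nearestControl(900, 900)); // null: outside threshold
    }
}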
Claim 24:
The rejection of claim 22 is incorporated. Claim(s) 24 is directed to an electronic device for accomplishing the steps of the method in claim 7, and is rejected using similar rationale(s).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Below is a list of these references, including why they are pertinent:
Nilo; Bruce D. et al. US 20180335914 A1, is pertinent to claims 1, 13 and 27 for disclosing a drag and drop manager for transferring information from one application to another using a touchscreen device, ¶¶ 2 and 53.
Carullo; Vittorio et al. US 20100031170 A1, is pertinent to claims 1, 13 and 27 for disclosing defining object metadata types as being "draggable" if the corresponding metadata value can be exchanged between objects using a drag and drop facility, ¶ 24.
Commarford; Patrick M. et al. US 20120260203 A1, is pertinent to claims 1, 7, 13, 24, and 27 for disclosing displaying drop zones for drag and drop operations, ¶ 5, e.g., an icon and its surrounding area [closest to the location of the drag end], ¶ 22; files are transferred/moved to a location associated with the drop zone based on releasing/dropping the input on the drop zone, ¶¶ 16 and 24.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GABRIEL S MERCADO whose telephone number is (408)918-7537. The examiner can normally be reached Mon-Fri 8am-5pm (Eastern Time).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kieu Vu can be reached at (571) 272-4057. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Gabriel Mercado/Primary Examiner, Art Unit 2171