DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in response to application 18/846,232, filed 9/11/2024.
Claims 1-17 and 20-22 are presented for examination.
Examiner’s Note
Claim 15 recites the term “a display module”, which is interpreted under its broadest reasonable interpretation as a structural component of a computer system. Although the claim recites that the module is “configured to” perform certain functions, the term connotes sufficient structure to a person of ordinary skill in the art, and therefore 35 U.S.C. §112(f) is not invoked. Claim 15 also recites “a processing module” and “a communication module”; these terms are generic placeholders that do not denote a definite structure. Therefore, “processing module” and “communication module” are interpreted under 35 U.S.C. §112(f) (see below).
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claim 15 is rejected under 35 U.S.C. 112(a) as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding Claim 15, the specification fails to provide adequate written description for the recited “processing module” and “communication module.” Although these elements are claimed as distinct components, the original-filed specification does not describe corresponding structures, subsystems, or algorithms for these modules, nor does it identify how such modules are implemented beyond reciting their intended functions. The disclosure describes high-level operations performed by a client device but does not reasonably convey to one of ordinary skill in the art that the inventors had possession of separately identifiable “processing” and “communication” modules as claimed. Accordingly, the specification lacks sufficient written description support for these limitations.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 15 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Each of the claim limitations “a processing module configured to …” and “a communication module configured to …” invokes 35 U.S.C. 112(f). However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. As a result, one of ordinary skill in the art would not be able to reasonably determine the scope of the claimed “processing module” and “communication module,” because it is unclear what structure performs the recited functions.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Claim Interpretation under 35 U.S.C. § 112(f) or 35 U.S.C. § 112 (pre-AIA), Sixth Paragraph
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. - An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Claim 15 invokes 35 U.S.C. 112(f) or 35 U.S.C. 112 (pre-AIA), sixth paragraph, by using the language "configured to”. A review of the specification has been conducted to identify the corresponding structure described in the specification for the 35 U.S.C. 112(f) or 35 U.S.C. 112 (pre-AIA) limitations, as discussed below:
Claim 15: Limitation “a processing module” has been interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because it uses a generic placeholder (“module”) coupled with the linking phrase “configured to” and functional language “acquire” without reciting sufficient structure to achieve the function. Furthermore, the generic placeholder is not preceded by a structural modifier. Limitation “a communication module” has been interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because it uses a generic placeholder (“module”) coupled with the linking phrase “configured to” and functional language “send” without reciting sufficient structure to achieve the function. Furthermore, the generic placeholder is not preceded by a structural modifier.
Following the 3-prong analysis test (see MPEP 2181 (I)):
A) " a processing module” and “a communication module” are generic placeholders that do not necessarily possess a specific structural meaning - where “acquire” and “send” may reasonably be interpreted to cover 'software' generators and processors;
B) the generic place holders are modified by functional language as above, linked by the linking phrases "configured to"; and
C) the above generic placeholders are not further modified by sufficient structure, material or acts for performing the claimed function.
Since the claim limitation(s) invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, claim 15 has been interpreted to cover the corresponding structure described in the specification that achieves the claimed function, and equivalents thereof.
A review of the specification fails to identify corresponding structure for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, limitations:
Paragraph 0046 discloses an electronic device for implementing interactions; however, the specification does not disclose corresponding structure, material, or acts (such as the electronic device) that are clearly linked to the recited “processing module” and “communication module,” nor does it disclose an algorithm for performing the claimed functions. For purposes of continued examination only, and without conceding that the requirements of 35 U.S.C. §112(f) have been satisfied, the Examiner provisionally interprets the recited “processing module” as corresponding to a generic processor, and the “communication module” as corresponding to generic communication circuitry or a network interface, under the broadest reasonable interpretation. This provisional interpretation is applied solely to facilitate evaluation of the prior art and does not cure the deficiencies identified under 35 U.S.C. §112(a), §112(b), or §112(f).
If applicant wishes to provide further explanation or dispute the examiner's interpretation of the corresponding structure, applicant must identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters in response to this Office action.
If applicant does not intend to have the claim limitation(s) treated under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may amend the claim(s) so that it/they will clearly not invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, or present a sufficient showing that the claim recites/recite sufficient structure, material, or acts for performing the claimed function to preclude application of 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
For more information, see MPEP § 2173 et seq. and Supplementary Examination Guidelines for Determining Compliance With 35 U.S.C. 112 and for Treatment of Related Issues in Patent Applications, 76 FR 7162, 7167 (Feb. 9, 2011).
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1 and 15-17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Alexander Kropivny, Pat No US 10,963,124 (hereafter Kropivny).
Regarding Claim 1, Kropivny discloses an interaction method, applied to a first client [ABSTRACT: Discloses plurality of client (first, second … client) computers each display common content on an associated display area.], comprising:
in response to a trigger operation for an identifier of a target resource on a resource panel [ABSTRACT, claim 1: Discloses generating messages representing user input received at one client computer. User input at the first client functions as the trigger operation selecting content to be shared; and FIG.11: Discloses “a resource panel” (elements 476-495 – the toolbar/control panel region). Across the top of FIG.11 is a distinct UI region containing buttons such as Open, Save, ImageShow, LinkCreate, and Publish; each button corresponds to an action on a selectable resource. These controls are grouped in a dedicated UI panel region, separate from the main content area. This region is a panel that displays identifiers (icons/text labels), allows a trigger operation (click), and causes a resource to be displayed or manipulated in the main page. Thus, the toolbar region equates to “a resource panel”.], displaying the target resource on a live streaming page [ABSTRACT: Discloses display common content on an associated display area. The selected content is displayed at the initiating client; and FIG.11: Discloses “a live streaming page” (element 472 – a page where live content is being displayed and interacted with).];
in response to an input operation for the target resource, acquiring display information corresponding to the target resource [ABSTRACT, claim 1: Discloses messages… defining content to be shared with the plurality of client computers; and col.15 lines 1-7, FIG.11: Discloses that a click is “an input operation”. The generated messages correspond to display information defining how the content is rendered.]; and
sending the display information to a server [ABSTRACT, claim 1: Discloses causing the one client computer to transmit the generated messages to a server. The first client sends display information to the server.] to send the display information to a second client through the server [ABSTRACT: Discloses to elicit transmission of output messages from the server to each of the plurality of client computers (second client). The server forwards the display information to other connected clients.], so that the target resource is displayed on the live streaming page at the second client according to the display information [ABSTRACT, claim 1: Discloses displaying the shared content over the common content on the respective display areas. The second client renders the same target resource based on the received display information.].
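For illustration of the client–server relay mapped above, a minimal sketch follows (illustrative only; it is not drawn from Kropivny or from the claims, and all class and function names are hypothetical):

```python
# Minimal illustrative sketch (hypothetical names): a first client acquires
# display information for a target resource and sends it to a server, which
# forwards it to a second client so the same resource is rendered there.

class RelayServer:
    def __init__(self):
        self.clients = []              # connected second clients

    def register(self, client):
        self.clients.append(client)

    def receive(self, message):
        for client in self.clients:    # forward the display information onward
            client.render(message["resource"], message["display_info"])

class Client:
    def __init__(self, name):
        self.name = name

    def render(self, resource, display_info):
        print(f"{self.name} displays {resource} with {display_info}")

server = RelayServer()
server.register(Client("second client"))
# First client: a trigger operation selects the resource, an input operation
# yields display information, which is then sent to the server.
server.receive({"resource": "image.png", "display_info": {"scale": 1.5}})
```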
Regarding Claim 15, Kropivny discloses an interaction apparatus [col.2 lines 15-17: Discloses a client computer.], applied to a first client [col.2 lines 15-17: Discloses displaying on a first client computer.], comprising:
a display module [FIG.1: Discloses displays (elements 15, 17, 19).] configured to, in response to a trigger operation for an identifier of a target resource on a resource panel [ABSTRACT, claim 1: Discloses generating messages representing user input received at one client computer. User input at the first client functions as the trigger operation selecting content to be shared; and FIG.11: Discloses “a resource panel” (elements 476-495 – the toolbar/control panel region). Across the top of FIG.11 is a distinct UI region containing buttons such as Open, Save, ImageShow, LinkCreate, and Publish; each button corresponds to an action on a selectable resource. These controls are grouped in a dedicated UI panel region, separate from the main content area. This region is a panel that displays identifiers (icons/text labels), allows a trigger operation (click), and causes a resource to be displayed or manipulated in the main page. Thus, the toolbar region equates to “a resource panel”.], display the target resource on a live streaming page [ABSTRACT: Discloses display common content on an associated display area. The selected content is displayed at the initiating client; and FIG.11: Discloses “a live streaming page” (element 472 – a page where live content is being displayed and interacted with).];
a processing module [FIG.2: Discloses a server processor circuit; and FIG.9: Discloses a client processor circuit.] configured to, in response to an input operation for the target resource, acquire display information corresponding to the target resource [ABSTRACT, claim 1: Discloses messages… defining content to be shared with the plurality of client computers; and col.15 lines 1-7, FIG.11: Discloses that a click is “an input operation”. The generated messages correspond to display information defining how the content is rendered.]; and
a communication module [col.2 lines 15-16: Discloses a computer network in communication between client devices and a server.] configured to send the display information to a server [ABSTRACT, claim 1: Discloses causing the one client computer to transmit the generated messages to a server. The first client sends display information to the server.] to send the display information to a second client through the server [ABSTRACT: Discloses to elicit transmission of output messages from the server to each of the plurality of client computers (second client). The server forwards the display information to other connected clients.], so that the target resource is displayed on the live streaming page at the second client according to the display information [ABSTRACT, claim 1: Discloses displaying the shared content over the common content on the respective display areas. The second client renders the same target resource based on the received display information.].
Regarding Claim 16, Kropivny discloses a non-transitory computer-readable storage medium, comprising: computer program instructions which, when executed by a processor of an electronic device, cause the electronic device to implement the interaction method [col.4 lines 58-61: Discloses a computer readable medium encoded with codes (computer program instructions) for directing a processor circuit (executing) displayed on a first client computer (an electronic device).], comprising:
in response to a trigger operation for an identifier of a target resource on a resource panel [ABSTRACT, claim 1: Discloses generating messages representing user input received at one client computer. User input at the first client functions as the trigger operation selecting content to be shared; and FIG.11: Discloses “a resource panel” (elements 476-495 – the toolbar/control panel region). Across the top of FIG.11 is a distinct UI region containing buttons such as Open, Save, ImageShow, LinkCreate, and Publish; each button corresponds to an action on a selectable resource. These controls are grouped in a dedicated UI panel region, separate from the main content area. This region is a panel that displays identifiers (icons/text labels), allows a trigger operation (click), and causes a resource to be displayed or manipulated in the main page. Thus, the toolbar region equates to “a resource panel”.], displaying the target resource on a live streaming page [ABSTRACT: Discloses display common content on an associated display area. The selected content is displayed at the initiating client; and FIG.11: Discloses “a live streaming page” (element 472 – a page where live content is being displayed and interacted with).];
in response to an input operation for the target resource, acquiring display information corresponding to the target resource [ABSTRACT, claim 1: Discloses messages… defining content to be shared with the plurality of client computers; and col.15 lines 1-7, FIG.11: Discloses that a click is “an input operation”. The generated messages correspond to display information defining how the content is rendered.]; and
sending the display information to a server [ABSTRACT, claim 1: Discloses causing the one client computer to transmit the generated messages to a server. The first client sends display information to the server.] to send the display information to a second client through the server [ABSTRACT: Discloses to elicit transmission of output messages from the server to each of the plurality of client computers (second client). The server forwards the display information to other connected clients.], so that the target resource is displayed on the live streaming page at the second client according to the display information [ABSTRACT, claim 1: Discloses displaying the shared content over the common content on the respective display areas. The second client renders the same target resource based on the received display information.].
Regarding Claim 17, Kropivny discloses an electronic device [col.2 lines 15-17: Discloses a client computer.], comprising: a memory and a processor, the memory being configured to store computer program instructions [col.4 lines 58-61: Discloses a computer readable medium (memory) encoded with codes (computer program instructions) for directing a processor circuit (executing) displayed on a first client computer (an electronic device).]; and
the processor being configured to execute the computer program instructions to implement the interaction method [col.4 lines 58-61: Discloses a computer readable medium encoded with codes (computer program instructions) for directing a processor circuit (executing) displayed on a first client computer (an electronic device).] on a live streaming channel according to claim 1 [See the rejection of claim 1 above.].
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 2-7, 12 and 20-22 are rejected under 35 U.S.C. 103 as being unpatentable over Alexander Kropivny, Pat No US 10,963,124 (hereafter Kropivny) and further in view of Jobs et al., Pub No US 2008/0174570 (hereafter Jobs).
Regarding Claim 2, Kropivny discloses the interaction method according to claim 1. Kropivny does not explicitly disclose wherein the input operation comprises a touch operation, and the in response to an input operation for the target resource, acquiring display information corresponding to the target resource, comprises: in response to the touch operation for the target resource, acquiring movement information of one or more touch points corresponding to the touch operation; and acquiring the display information corresponding to the target resource based on the movement information of the one or more touch points. However, in analogous art, Jobs discloses detecting one or more finger contacts with the touch screen display… applying one or more heuristics to the one or more finger contacts to determine a command [ABSTRACT, para(s).0009, 0011, 0015, claim 1]. A touch screen has a touch-sensitive surface, sensor or set of sensors that accepts input from the user based on haptic and/or tactile contact. The touch screen and the display controller detect contact (and any movement or breaking of the contact) on the touch screen and converts the detected contact into interaction with user-interface objects (e.g., one or more soft keys, icons, web pages or images) that are displayed on the touch screen [para.0104]. Determining movement of the point of contact may include determining speed (magnitude), velocity (magnitude and direction), and/or an acceleration (a change in magnitude and/or direction) of the point of contact (movement information) [para.0118]. Thus, Jobs explicitly discloses detecting touch operations and acquiring movement information of one or more touch points, which is used to generate commands corresponding to display information. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Kropivny with these limitations, as taught by Jobs in order to yield the predictable result of providing a synchronized multi-client display system where different touch gestures control different display dimensions of a shared resource [Jobs: para.0828].
Regarding Claim 3, the combined teachings of Kropivny and Jobs discloses the interaction method according to claim 2, and Jobs further discloses wherein:
in response to a type of the touch operation being single-touch, the movement information of the one or more touch points comprises movement information of a single touch point [para.0118: Discloses determining movement of the contact and tracking the movement across the touch screen. Determining movement of the point of contact may include determining speed (magnitude), velocity (magnitude and direction), and/or an acceleration (a change in magnitude and/or direction) of the point of contact (movement information). These operations may be applied to a single touch contact (e.g., one finger contacts) or multiple simultaneous touch contacts (e.g., "multitouch"/multiple finger contacts), the contact/motion module and the display controller detects contact on a touchpad; and FIG.7: depicts a single touch point used to scroll a list of content, where movement of the single touch point provides movement information corresponding to a single-touch operation.]; and
the acquiring the display information corresponding to the target resource based on the movement information of the one or more touch points, comprises:
acquiring information of a display path corresponding to the target resource according to the movement information of the single touch point, the display information comprising the information of the display path [para.0118: Discloses movement information of the single touch point includes direction (path); and para.0420: Discloses the displayed portion of the image is translated in accordance with the direction of the drag or swipe gesture (e.g., vertical, horizontal, or diagonal translation); and para.0861: Discloses translating the page content has an associated direction of translation that corresponds to a direction of movement of the N-finger translation gesture. In some embodiments, the direction of translation corresponds directly to the direction of finger movement; in some embodiments, however, the direction of translation is mapped from the direction of finger movement in accordance with a rule; and FIG.7: Shows continuous vertical movement of displayed content in response to the path of the single touch point, which defines a display path based on the movement information.]. This claim is rejected on the same grounds as claim 2.
Regarding Claim 4, the combined teachings of Kropivny and Jobs discloses the interaction method according to claim 2, and Jobs further discloses wherein:
in response to a type of the touch operation being multi-touch, the movement information of the one or more touch points comprises movement information corresponding to each of a plurality of touch points [para.0439, FIG(s).19B: Illustrates multiple touch points simultaneously interacting with an image, with arrows indicating movement of each touch point, corresponding to movement information for a plurality of touch points]; and
the acquiring the display information corresponding to the target resource based on the movement information of the one or more touch points, comprises:
acquiring information of a display size and/or a display angle corresponding to the target resource according to the movement information corresponding to each of the plurality of touch points, the display information comprising the information of the display size and/or the display angle [para.0439, FIG(s).19B: Explicitly shows scaling and adjustment of an image in response to relative movement of multiple touch points, which corresponds to acquiring display size and/or display angle information.]. This claim is rejected on the same grounds as claim 2.
Regarding Claim 5, the combined teachings of Kropivny and Jobs discloses the interaction method according to claim 4, and Jobs further discloses wherein in response to the display information comprising the information of the display size, the acquiring information of a display size and/or a display angle corresponding to the target resource according to the movement information corresponding to each of the plurality of touch points, comprises:
determining distances between the plurality of touch points according to the movement information corresponding to each of the plurality of touch points [FIG.19B: Illustrates multiple touch points moving relative to one another during a pinch or spread gesture. Such gestures rely on determining relative distances between the plurality of touch points based on their movement.]; and
determining the information of the display size corresponding to the target resource according to the distances between the plurality of touch points [FIG.19B: Depicts scaling of image 1606 in response to relative movement of multiple touch points, indicating that the display size is determined based on the distance between the touch points.]. This claim is rejected on the same grounds as claim 4.
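As an illustration of determining a display size from the distances between touch points, a minimal sketch follows (illustrative only; it assumes a two-finger pinch/spread gesture, is not drawn from Jobs, and all names are hypothetical):

```python
import math

# Illustrative only: deriving a display-size (scale) factor from the change in
# distance between two touch points, as in a pinch/spread gesture.

def distance(p1, p2):
    """Euclidean distance between two touch-point positions (x, y)."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

def scale_factor(start_points, end_points):
    """Ratio of final to initial distance between the two touch points."""
    d0 = distance(start_points[0], start_points[1])
    d1 = distance(end_points[0], end_points[1])
    return d1 / d0 if d0 else 1.0

# Example: fingers spreading from 100 px apart to 150 px apart -> 1.5x zoom.
print(scale_factor([(0, 0), (100, 0)], [(0, 0), (150, 0)]))  # 1.5
```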
Regarding Claim 6, the combined teachings of Kropivny and Jobs discloses the interaction method according to claim 4, and Jobs further discloses wherein in response to the display information comprising the information of the display angle, the acquiring information of a display size and/or a display angle corresponding to the target resource according to the movement information corresponding to each of the plurality of touch points, comprises:
determining rotation angles of connection lines between the plurality of touch points according to the movement information corresponding to each of the plurality of touch points [FIG.19B: Illustrates multiple touch points moving in a rotational manner relative to one another during a rotate gesture. Such gestures require determining rotation angles of connection lines between the touch points based on their movement.]; and
acquiring the information of the display angle corresponding to the target resource according to the rotation angles of the connection lines between the plurality of touch points [FIG.19B: Depicts reorientation of image 1606 in response to relative angular movement of multiple touch points, indicating that a display angle is acquired according to the determined rotation angles.]. This claim is rejected on the same grounds as claim 4.
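Similarly, a display angle may be derived from the rotation of the line connecting two touch points; the following minimal sketch is illustrative only (hypothetical names, not drawn from Jobs):

```python
import math

# Illustrative only: deriving a display (rotation) angle from the change in
# orientation of the line connecting two touch points.

def line_angle(p1, p2):
    """Angle, in degrees, of the connection line from p1 to p2."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def rotation_angle(start_points, end_points):
    """Change in orientation of the connecting line over the gesture."""
    return (line_angle(end_points[0], end_points[1])
            - line_angle(start_points[0], start_points[1]))

# Example: the connection line rotates from horizontal to 45 degrees.
print(rotation_angle([(0, 0), (100, 0)], [(0, 0), (100, 100)]))  # 45.0
```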
Regarding Claim 7, the combined teachings of Kropivny and Jobs discloses the interaction method according to claim 2, and Jobs further discloses wherein the movement information of the one or more touch points comprises position information and time information of each touch point [para.0118, FIG.19B: Discloses determining speed (magnitude), velocity (magnitude and direction), and/or acceleration of the point of contact and tracking its movement across the touch screen, which requires the position of each touch point to be tracked over time, i.e., position information and time information of each touch point.]. This claim is rejected on the same grounds as claim 2.
Regarding Claim 12, Kropivny discloses the interaction method according to claim 1. Kropivny does not explicitly disclose wherein: the input operation comprises a touch operation; and different touch operation types are configured to control display modes of the target resource in different dimensions. However, in analogous art, Jobs discloses detecting one or more finger contacts with the touch screen display… applying one or more heuristics to the one or more finger contacts to determine a command [ABSTRACT, para(s).0009, 0011, 0015, claim 1]. A touch screen has a touch-sensitive surface, sensor or set of sensors that accepts input from the user based on haptic and/or tactile contact. The touch screen and the display controller detect contact (and any movement or breaking of the contact) on the touch screen and converts the detected contact into interaction with user-interface objects (e.g., one or more soft keys, icons, web pages or images) that are displayed on the touch screen [para.0104]. Determining movement of the point of contact may include determining speed (magnitude), velocity (magnitude and direction), and/or an acceleration (a change in magnitude and/or direction) of the point of contact (movement information) [para.0118]. Thus, Jobs teaches different touch operation types (single-touch and multi-touch – para.0118) that control different display behaviors, such as one-dimensional scrolling [para.1217], two-dimensional translation [para.0010], scaling [FIG.19B], and rotation [FIG.19B], corresponding to controlling display modes in different dimensions. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Kropivny with these limitations, as taught by Jobs in order to yield the predictable result of providing a synchronized multi-client display system where different touch gestures control different display dimensions of a shared resource [Jobs: para.0828].
Regarding Claim 20, Kropivny discloses the non-transitory computer-readable storage medium according to claim 16. Kropivny does not explicitly disclose wherein: the input operation comprises a touch operation; and the in response to an input operation for the target resource, acquiring display information corresponding to the target resource, comprises: in response to the touch operation for the target resource, acquiring movement information of one or more touch points corresponding to the touch operation; and acquiring the display information corresponding to the target resource based on the movement information of the one or more touch points. However, in analogous art, Jobs discloses detecting one or more finger contacts with the touch screen display… applying one or more heuristics to the one or more finger contacts to determine a command [ABSTRACT, para(s).0009, 0011, 0015, claim 1]. A touch screen has a touch-sensitive surface, sensor or set of sensors that accepts input from the user based on haptic and/or tactile contact. The touch screen and the display controller detect contact (and any movement or breaking of the contact) on the touch screen and converts the detected contact into interaction with user-interface objects (e.g., one or more soft keys, icons, web pages or images) that are displayed on the touch screen [para.0104]. Determining movement of the point of contact may include determining speed (magnitude), velocity (magnitude and direction), and/or an acceleration (a change in magnitude and/or direction) of the point of contact (movement information) [para.0118]. Thus, Jobs explicitly discloses detecting touch operations and acquiring movement information of one or more touch points, which is used to generate commands corresponding to display information. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Kropivny with these limitations, as taught by Jobs in order to yield the predictable result of providing a synchronized multi-client display system where different touch gestures control different display dimensions of a shared resource [Jobs: para.0828]. This claim is rejected on the same grounds as claim 2.
Regarding Claim 21, the combined teachings of Kropivny and Jobs discloses the non-transitory computer-readable storage medium according to claim 20, and Jobs further discloses wherein:
in response to a type of the touch operation being single-touch, the movement information of the one or more touch points comprises movement information of a single touch point [para.0118: Discloses determining movement of the contact and tracking the movement across the touch screen. Determining movement of the point of contact may include determining speed (magnitude), velocity (magnitude and direction), and/or an acceleration (a change in magnitude and/or direction) of the point of contact (movement information). These operations may be applied to a single touch contact (e.g., one finger contacts) or multiple simultaneous touch contacts (e.g., "multitouch"/multiple finger contacts), the contact/motion module and the display controller detects contact on a touchpad; and FIG.7: depicts a single touch point used to scroll a list of content, where movement of the single touch point provides movement information corresponding to a single-touch operation.]; and
the acquiring the display information corresponding to the target resource based on the movement information of the one or more touch points, comprises:
acquiring information of a display path corresponding to the target resource according to the movement information of the single touch point, the display information comprising the information of the display path [para.0118: Discloses movement information of the single touch point includes direction (path); and para.0420: Discloses the displayed portion of the image is translated in accordance with the direction of the drag or swipe gesture (e.g., vertical, horizontal, or diagonal translation); and para.0861: Discloses translating the page content has an associated direction of translation that corresponds to a direction of movement of the N-finger translation gesture. In some embodiments, the direction of translation corresponds directly to the direction of finger movement; in some embodiments, however, the direction of translation is mapped from the direction of finger movement in accordance with a rule; and FIG.7: Shows continuous vertical movement of displayed content in response to the path of the single touch point, which defines a display path based on the movement information.]. This claim is rejected on the same grounds as claim 20.
Regarding Claim 22, the combined teachings of Kropivny and Jobs discloses the non-transitory computer-readable storage medium according to claim 20, and Jobs further discloses wherein:
in response to a type of the touch operation being multi-touch, the movement information of the one or more touch points comprises movement information corresponding to each of a plurality of touch points [para.0439, FIG(s).19B: Illustrates multiple touch points simultaneously interacting with an image, with arrows indicating movement of each touch point, corresponding to movement information for a plurality of touch points]; and
the acquiring the display information corresponding to the target resource based on the movement information of the one or more touch points, comprises:
acquiring information of a display size and/or a display angle corresponding to the target resource according to the movement information corresponding to each of the plurality of touch points, the display information comprising the information of the display size and/or the display angle [para.0439, FIG(s).19B: Explicitly shows scaling and adjustment of an image in response to relative movement of multiple touch points, which corresponds to acquiring display size and/or display angle information.]. This claim is rejected on the same grounds as claim 20.
Claims 8 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Alexander Kropivny, Pat No US 10,963,124 (hereafter Kropivny) and further in view of Roman et al., Pub No US 2013/0332855 (hereafter Roman).
Regarding Claim 8, Kropivny discloses the interaction method according to claim 1. Kropivny does not explicitly disclose wherein the in response to an input operation for the target resource, acquiring display information corresponding to the target resource, comprises: in response to an input operation for a target display mode in a display setting panel corresponding to the target resource, acquiring display information corresponding to the target display mode as the display information corresponding to the target resource. However, in analogous art, Roman discloses the grid view… may include a setting icon for opening a settings menu [para.0387]. The device is displaying a settings menu… includes a photo stream menu item… for displaying a photo stream settings menu [para.0279]. Thus, Roman explicitly discloses a display setting panel (settings menu) corresponding to a displayed resource, where user input selects a display-related option and the system acquires configuration information corresponding to the selected mode. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Kropivny with these limitations, as taught by Roman in order to yield the predictable result of allowing display content and associated updates to be shared and presented across multiple client devices via a server-mediated stream, thereby enabling synchronized or near-real-time presentation of the target resource on multiple devices [Roman: ABSTRACT].
Regarding Claim 10, Kropivny discloses the interaction method according to claim 1. Kropivny does not explicitly disclose wherein the in response to an input operation for the target resource, acquiring display information corresponding to the target resource, comprises: receiving display information input by a user by means of a display setting option or editing control provided by a display setting panel corresponding to the target resource, as the display information corresponding to the target resource. However, in analogous art, Roman discloses the grid view… may include a setting icon for opening a settings menu [para.0387]. The device is displaying a settings menu… includes a photo stream menu item… for displaying a photo stream settings menu [para.0279]. Thus, Roman teaches receiving user input via selectable options within a settings menu, where the user-provided input controls how the target resource is displayed, corresponding to display information. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Kropivny with these limitations, as taught by Roman in order to yield the predictable result of allowing display content and associated updates to be shared and presented across multiple client devices via a server-mediated stream, thereby enabling synchronized or near-real-time presentation of the target resource on multiple devices [Roman: ABSTRACT].
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Alexander Kropivny, Pat No US 10,963,124 (hereafter Kropivny) and further in view of Roman et al., Pub No US 2013/0332855 (hereafter Roman) and further in view of Jobs et al., Pub No US 2008/0174570 (hereafter Jobs).
Regarding Claim 9, the combined teachings of Kropivny and Roman discloses the interaction method according to claim 8. The combination does not explicitly disclose wherein the target display mode comprises: one or more of a display path, a display size, or a display angle. However, in analogous art, Jobs discloses that the touch screen and the display controller detect contact (and any movement or breaking of the contact) on the touch screen and convert the detected contact into interaction with user-interface objects (e.g., one or more soft keys, icons, web pages or images) that are displayed on the touch screen [para.0104]. Determining movement of the point of contact may include determining speed (magnitude), velocity (magnitude and direction), and/or an acceleration (a change in magnitude and/or direction) of the point of contact (movement information) [para.0118]. Thus, Jobs teaches display paths (single-touch movement), display size (pinch gestures – FIG.19B), and display angle (rotation gestures), which constitute selectable display modes under the broadest reasonable interpretation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Kropivny and Roman with these limitations, as taught by Jobs in order to yield the predictable result of providing a synchronized multi-client display system where different touch gestures control different display dimensions of a shared resource [Jobs: para.0828].
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Alexander Kropivny, Pat No US 10,963,124 (hereafter Kropivny) and further in view of DeMarco et al., Pat No US 8,392,821 (hereafter DeMarco).
Regarding Claim 11, Kropivny discloses the interaction method according to claim 1. Kropivny does not explicitly disclose further comprising: in response to the display of the target resource being started on the live streaming page, starting timekeeping; and in response to a timekeeping duration being greater than a preset duration, controlling the display of the target resource from the live streaming page to end. However, in analogous art, DeMarco discloses playlist data that includes… timing information indicating when to display the overlay during playing of the video [col.1 lines 34-37]. Table 1 discloses On Event: video playing and video.time "near" timepoint or mouse enters timepoint {Show Content(type) // text, image, video, or audio}; and On Event: video.time "near" timepoint + timepoint.duration or mouse exits timepoint {Hide tag.content}. The tags and overlays may be deleted after the duration or end point is exceeded during playback [col.5 lines 58-60]. Thus, DeMarco explicitly teaches starting timekeeping when the display of an overlay begins during video playback, using tracked video time, comparing elapsed time against a preset duration, and automatically ending the display by hiding or deleting the overlay once the duration is exceeded. This directly corresponds to starting timekeeping upon display initiation and ending the display when a preset duration is exceeded. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Kropivny with these limitations, as taught by DeMarco in order to yield the predictable result of controlling the duration of presentation of the target resource on a live streaming page by tracking playback time and terminating or advancing the display once a preset duration associated with the media content or playlist is reached [DeMarco: col.1 lines 34-37].
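As an illustration of the timekeeping behavior mapped above, a minimal sketch follows (illustrative only; it assumes wall-clock timing rather than DeMarco's tracked video time, and all names are hypothetical):

```python
import time

# Illustrative only: timekeeping starts when display of a resource begins, and
# the display is ended once a preset duration is exceeded.

PRESET_DURATION = 10.0  # seconds the resource stays on the page

def show_with_timeout(start_display, end_display, preset=PRESET_DURATION):
    start_display()               # display of the target resource begins
    started = time.monotonic()    # timekeeping starts with the display
    while time.monotonic() - started <= preset:
        time.sleep(0.1)           # poll until the preset duration elapses
    end_display()                 # display is controlled to end

# Example usage with a short preset duration.
show_with_timeout(lambda: print("resource shown"),
                  lambda: print("resource hidden"),
                  preset=0.3)
```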
Claims 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Alexander Kropivny, Pat No US 10,963,124 (hereafter Kropivny) and further in view of Gavrilescu et al., Pat No US 7,660,899 (hereafter Gavrilescu).
Regarding Claim 13, Kropivny discloses the interaction method according to claim 1. Kropivny does not explicitly disclose further comprising: in response to the trigger operation for the identifier of the target resource on the resource panel, displaying guidance information corresponding to the target resource. However, in analogous art, Gavrilescu discloses the synchronization message may indicate that the first user highlighted a portion of the web page [col.5 lines 48-50]. Thus, Gavrilescu discloses displaying contextual visual guidance (highlighting) in response to user actions on a resource, which corresponds to displaying guidance information for the selected target resource. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Kropivny with these limitations, as taught by Gavrilescu in order to yield the predictable result of displaying contextual guidance information by highlighting a selected portion of a shared resource so that another client is guided to the same content [Gavrilescu: col.2 lines 54-58, col.2 lines 64-67, col.3 lines 1-3].
Regarding Claim 14, Kropivny discloses the interaction method according to claim 1. Kropivny does not explicitly disclose wherein the sending the display information to a server to send the display information to a second client through the server, so that the target resource is displayed on the live streaming page at the second client according to the display information, comprises: sending the display information to the server to send the display information to the second client through the server, so that the second client displays the target resource on the live streaming page according to the display information, in response to a first display mode being disabled and a second display mode being enabled. However, in analogous art, Gavrilescu discloses the synchronization message indicates commands reflecting the browsing performed by the first user, causing the second client to mirror the browsing performed by the first user [FIG.2, ABSTRACT, col.2 lines 54-58, col.2 lines 64-67, col.3 lines 1-3]. Once a second user accepts an invitation, a synchronization session is established between the first client and the second client [col.6 lines 51-53]. The synchronization session may be terminated when either user exits the co-browsing session [ABSTRACT]. Gavrilescu further teaches exchanging display configuration information indicating whether browser display features are enabled or disabled (e.g., images, cookies, or java), which corresponds to enabling and disabling display modes [col.7 lines 3-27]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Kropivny with these limitations, as taught by Gavrilescu in order to yield the predictable result of synchronizing enabled and disabled display states so that the second client renders the target resource according to the enabled display mode [Gavrilescu: col.6 lines 51-53, col.7 lines 3-27].
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Zhang (US 11,558,645) – Discloses displaying a first live-streaming interface of the live-streaming room, wherein the first live-streaming interface includes live-streaming data provided by a second terminal, the first terminal includes a terminal in a management mode of a target account, and the second terminal includes a terminal in a live-streaming mode of the target account; detecting a first operation through the first live-streaming interface; and sending a first instruction to a live-streaming server in response to the first operation, wherein the first instruction carries the target account, and the live-streaming server is configured to send the first instruction to the second terminal, and the first instruction instructs the second terminal to perform a synchronization operation with the first terminal (col.1 lines 35-48).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADIL OCAK whose telephone number is (571) 272-2774. The examiner can normally be reached on M-F 8:00 AM - 5:00 PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nasser Goodarzi can be reached on 571-272-4195. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ADIL OCAK/Primary Examiner, Art Unit 2426