DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is responsive to the amendment to U.S. Patent Application No. 17/464,787 filed on 26 February 2026. Claims 1, 3-7, 10, and 11 are pending in the application. Claims 1, 10, and 11 have been amended.
Prior Art
Listed below are the prior art references relied upon in this Office Action:
Fard et al. (US Patent Application Publication US 20160018955 A1), referred to as Fard herein.
Mabey et al. (US Patent Application Publication US 20190026063 A1), referred to as Mabey herein.
Blinnikka et al. (US Patent Application Publication US 2010/0146434 A1), referred to as Blinnikka herein.
Response to Arguments/Remarks
Applicant’s prior art arguments have been fully considered but they are not persuasive.
Applicant argues (pg. 6) that Mabey does not generate a “whole image” by combining (i) a first partial image acquired by a first user (having the latest state of a first instruction image operated by the first user) with (ii) a second partial image acquired by a second user (having the latest state of a second instruction image operated by the second user).
Examiner respectfully disagrees. First, Fard teaches displaying a whole image that shows an entirety of the specific work area and generating the whole image by combining partial images. Mabey provides the additional functionality of forming a multi-device workspace with a virtual canvas shared by a plurality of users. Accordingly, the combination of Fard and Mabey teaches the foregoing functionality.
The foregoing applies to all independent claims and their dependent claims.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-7, 10, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Fard in view of Mabey and further in view of Blinnikka.
Regarding independent claim 1, Fard discloses “An information processing apparatus comprising: a processor configured to:
maintain a work area in which an instruction image that receives an instruction from a user is placed, the work area being an area larger than an area to be displayed on an operation screen for receiving an operation performed by the user (Fard, at ¶ [0034], presenting a user interface that includes one or more spaces containing program windows; a space, which is equivalent to a work area displayed in an operation screen, is a grouping of one or more applications or windows in relation to other applications or windows, such that the program(s)/applications of a single space are visible when the space is active, and such that a view can be generated of all spaces and their contents; the spaces in the matrix show portions of a larger desktop which may be zoomed, for instance, to show more detail, as described at ¶ [0047] and as depicted at Fig. 3. Within a given space, a program window can be designated as an active program window; the term “active program window” refers to a program window which is currently designated as the primary recipient of user input for input devices such as a keyboard, as described at ¶ [0052].); and
when a specific work area is designated, display a whole image that shows an entirety of the specific work area on the operation screen (id. at ¶ [0047], the spaces in the matrix show portions of a larger desktop which may be zoomed, for instance, to show more detail. In zoom mode, a single space can be active and presented with a larger size, with the other spaces not being visible, whereas the view mode enables users to navigate between spaces, as described at ¶ [0054].), wherein
the processor is configured to:
generate the whole image by combining partial images each showing an area of the work area to be displayed on one screen of the operation screen (id. at ¶ [0077], by invoking the Expose component, the user can use a hot key or key combination to automatically rearrange, scale, or resize program windows to increase usability, as depicted at Fig. 10A, combining application windows each showing an area of the work area to be displayed on one screen, thereby rearranging the user's program windows to maximize viewability in all spaces at the same time, with no windows overlapping.);
...,
...
...
However, Fard does not explicitly teach
and utilize the partial image acquired by another user when the work area is shared by a plurality of users such that the whole image is generated based on the partial image acquired by the user and the partial image acquired by the other user ... . (Mabey, at ¶ [0030], teaches that a multi-device workspace may be formed with only a single interactive device; although each workspace device has a separate display area, the display area of each workspace device becomes part of a larger multi-device workspace that is linked to the display areas of the other workspace devices when a single user logs into all the workspace devices (id. at ¶ [0091]). The devices (Device 1, … Device N) may be private devices owned by a user (e.g., a smart phone that belongs to a user) or public devices provided in, for example, offices, schools, or any other public place where multiple users may have access to the devices (id. at ¶ [0092]). Accordingly, Figs. 32A and 32B depict a work area or virtual canvas that is shared by a plurality of users (as depicted, three users); a user selects individual portions of the work area and selects a target device for each portion of content, and command messages are subsequently generated which specify the selected content and target devices. Each device's viewport information can be independently set to different areas of the virtual canvas, such that users of all three devices are able to concurrently work in the same virtual canvas but with different viewport information; if the user at device 1 sends an appropriate command, the displayed information on device 1 will be displayed on one or more of the other devices (id. at ¶¶ [0202]-[0203]).)
...
the whole image includes a plurality of the instruction images, and the whole image displayed to the user is generated based on (i) a latest state of a first instruction image operated by the user and (ii) a latest state of a second instruction image operated by the other user. (Mabey, at ¶ [0030], teaches the whole image, i.e., the virtual canvas depicted at Fig. 32A; a user selects individual portions of the image for target devices (id. at ¶ [0203]). The last state of the virtual canvas represents the last display state of the content objects displayed in the multi-device workspace. In other words, the last state of the virtual canvas represents the display state of data (content objects) as seen by the multi-device workspace across the plurality of workspace devices, immediately before a user logs out of one or more of the devices that make up the multi-device workspace (id. at ¶ [0102]).)
Mabey is in the same field of endeavor of interaction of multiple interactive devices in a multi-device workspace. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Fard's apparatus to utilize the partial image acquired by another user when the work area is shared by a plurality of users such that the whole image is generated based on the partial image acquired by the user and the partial image acquired by the other user, and such that the whole image displayed to the user is generated based on (i) a latest state of a first instruction image operated by the user and (ii) a latest state of a second instruction image operated by the other user, as taught by Mabey, because doing so allows users to see and collaborate on e-presentations that may be updated by a remotely connected device in real time, and the workspace management system is able to capture the last state of the multi-device workspace on which the user was working (Mabey, at ¶¶ [0082] and [0102]).
wherein the whole image is displayed superimposed on one partial image. (Blinnikka, at Fig. 1: Blinnikka teaches a minimap of the content which is superimposed on the partial image of the content.)
Blinnikka is analogous art to the present invention, since it is reasonably pertinent to the problem faced by the inventor, i.e., displaying a partial view and a whole view simultaneously. It would have been obvious, before the effective filing date of the claimed invention, to modify Fard's apparatus to utilize the partial image acquired by another user when the work area is shared by a plurality of users, wherein the whole image is generated based on the partial image acquired by the user and the partial image acquired by the other user, and to further superimpose the whole image on the partial image, as taught by Blinnikka, because doing so allows users to navigate to different portions of the interface, as suggested by Blinnikka at ¶ [0020].
Independent claim 10 is directed towards a non-transitory computer readable medium equivalent to the information processing apparatus of claim 1, and is therefore similarly rejected.
Independent claim 11 is directed towards an information processing method equivalent to the information processing apparatus of claim 1, and is therefore similarly rejected.
Regarding claim 3, Fard in view of Mabey and further in view of Blinnikka teaches all the limitations of independent claim 1 and its dependent claim 2. Fard further teaches “wherein the processor is configured to: newly generate the whole image when the partial image is acquired (Fard, at ¶ [0069], Fig. 7 depicts a screenshot of a computer display in space edit mode showing controls for adding or removing spaces. Icons and placeholder spaces are used to control and indicate adding or removing rows or columns in the group of spaces depicted in the example of Fig. 3. The user may add or remove columns or rows of spaces, for example by clicking on an icon or using a hotkey.).”
Regarding claim 4, Fard in view of Mabey and further in view of Blinnikka teaches all the limitations of independent claim 1 and its dependent claim 2. Fard further teaches “wherein the processor is configured to: acquire the partial image when the operation performed by a user satisfies a predetermined condition (Fard, at ¶ [0083] and at Fig. 11, discloses the use of an icon which represents the contents of the respective spaces; the icon 1102 has four sectors for the spaces 304-310, and the user can initiate a move to a new space, optionally involving the animation technique, by clicking on the icon corresponding to the desired space.).”
Regarding claim 5, Fard in view of Mabey and further in view of Blinnikka teaches all the limitations of independent claim 1 and its dependent claims 2 and 4. Fard further teaches “wherein the processor is configured to: acquire the partial image when the operation performed by a user is an operation of changing a display mode of the instruction image (Examiner notes that the instruction image shows a reduced original image or page of the document, as described at page 10 of the original specification. Further, changes in the display mode of the instruction image could be any edit operation of the instruction image according to page 14 of the original specification. Fard, at ¶ [0047], application windows are depicted in reduced size; the application windows can be resized or moved around in spaces to accommodate the user, and only the application window(s) in a particular space will be visible when that space is active. In some instances, programs or windows from another space can be visible; for instance, a small representation of another space can be visible on the display to achieve a “picture-in-picture” effect, as described at ¶ [0051].).”
Regarding claim 6, Fard in view of Mabey and further in view of Blinnikka teaches all the limitations of independent claim 1 and its dependent claims 2 and 4. Fard further teaches “wherein the processor is configured to: acquire the partial image when the operation performed by a user is an operation of stopping a display of the work area being displayed on the operation screen (Fard, at ¶¶ [0061]-[0062], discloses that users can switch between the modes to access various types of associated functionality; the modes include a view mode and a zoom-in mode.).”
Regarding claim 7, Fard in view of Mabey and further in view of Blinnikka teaches all the limitations of independent claim 1 and its dependent claim 2. Fard further teaches “wherein the processor is configured to: generate the partial image from partial image generation information that distinguishes each of the partial images (Fard, at ¶ [0041], a space identification engine handles identification of the application program(s) associated with each space.).”
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEKSEY OLSHANNIKOV whose telephone number is (571) 270-0667. The examiner can normally be reached M-F 9:30-6.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Scott Baderman can be reached at 571-272-3644. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALEKSEY OLSHANNIKOV/Primary Examiner, Art Unit 2118