Detailed Office Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
Status: All replies and correspondence should be addressed to the Examiner's art unit, 2629. Receipt is acknowledged of the papers submitted on 05-23-2024 with the new application, which have been placed of record in the file. Claims 1-10 are pending.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c).
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 06-07-2024 and 08-21-2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-10 are rejected under 35 U.S.C. 103 as being unpatentable over WU JINGYI (CN 108553894 A, cited in IDS), hereinafter WU, in view of GONG XIANG (CN 107450915 A, cited in IDS), hereinafter GONG, and GUO JIANLIANG (CN 108008991 A, cited in IDS), hereinafter GUO. (The foreign documents are provided with English translations. Please note that paragraph references refer to the English translations and figure references refer to the figures in the foreign documents.)
Regarding Claim 1, WU discloses a map display method (see abstract, para. 19) comprising: determining a touch coordinate of a touch point of a touch action on a target map in a case that the touch action on the target map displayed on a touch display screen is detected (figs. 2-3, paras. 51, 53, 55, 61, disclosing a display control method comprising: step S210, providing a map region on an interactive interface; step S220, when a first touch-control event acting on the map region is detected, determining from the map region, according to a touch-control point of the first touch-control event, a coordinate point corresponding to the touch-control point; and step S230, at a preset position on the interactive interface, enlarging a preset region which includes the coordinate point in the map region and displaying the enlarged region, so as to obtain a partial enlarged map, wherein the preset region may be understood as a local range comprising the coordinate point in the map region, and may be, for example, a circular region, a square region, or a region of any shape in the map region that is centered on the coordinate point); determining a rectangular area centered on the touch coordinate according to the touch coordinate (para. 61, disclosing a circular region, a square region, or a region of any shape (rectangular) in the map region that is centered on the coordinate point); and displaying an image comprised in the rectangular area in a target display area on the touch display screen (figs. 2-3, paras. 51, 53, 55, 61, steps S210-S230 as quoted above).
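For illustration only, the claimed step of determining a rectangular area of preset dimensions centered on a touch coordinate can be sketched as follows. This is a minimal sketch; the function and parameter names are hypothetical and are not drawn from WU, GONG, or the claims:

```python
def rect_centered_on(touch_x, touch_y, preset_w, preset_h, screen_w, screen_h):
    """Compute a rectangle of preset width/height centered on the touch
    coordinate, shifted as needed so it stays within the screen bounds."""
    left = touch_x - preset_w // 2
    top = touch_y - preset_h // 2
    # Clamp so the rectangle never extends past the screen edges.
    left = max(0, min(left, screen_w - preset_w))
    top = max(0, min(top, screen_h - preset_h))
    return (left, top, preset_w, preset_h)

# A touch at (100, 100) on a 1080x1920 screen with a 200x150 preset region
# is clamped at the left edge:
print(rect_centered_on(100, 100, 200, 150, 1080, 1920))  # (0, 25, 200, 150)
```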
However, WU fails to disclose a first preset length and a second preset length, wherein the first preset length is a preset length, and the second preset length is a preset height.
However, the prior art of GONG discloses a first preset length and a second preset length, wherein the first preset length is a preset length, and the second preset length is a preset height (please see paras. 6-18, disclosing a method for quickly capturing a multi-screen long image, the method comprising: determining the width (gWidth) and the height (total height) of the multi-screen long image that needs to be captured, creating a bitmap with the size gWidth*total height, then creating, by means of the bitmap, a canvas object associated with the bitmap, then, according to the attribute specification of a view, sequentially drawing sub-views in the created canvas object, and finally storing the drawn bitmap object).
WU teaches a map display method, comprising: determining a touch coordinate of a touch point of a touch action on a target map in a case that the touch action on the target map displayed on a touch display screen is detected; determining a rectangular area centered on the touch coordinate according to the touch coordinate.
WU teaches displaying an image comprised in the preset region, which may be, for example, a circular region, a square region, or a region of any shape (including rectangular) in the map region centered on the touch point, in a target display area on the touch display screen.
GONG teaches a first preset length , and a second preset length, wherein the first preset length is a preset length, and the second preset length is a preset height.
WU does not teach a first preset length and a second preset length, wherein the first preset length is a preset length, and the second preset length is a preset height.
Hence the prior art includes each element claimed, although not necessarily in a single prior art reference, with the only difference between the claimed invention and the prior art being the lack of actual combination of the elements in a single prior art reference.
In combination, WU performs the same function as it does separately: displaying a map in which the preset region, which may be, for example, a circular region, a square region, or a region of any shape (including rectangular) in the map region centered on the touch point, is shown in a target display area on the touch display screen.
GONG performs the same function as it does separately: obtaining the width of the current list-class view and storing it in a gWidth variable; obtaining the number of all sub-views and processing each sub-view; obtaining the attribute specification of each sub-view; obtaining the dimensions of each sub-view, as a view object, after its dimension specification is determined; through the above, determining the width gWidth and the height total height of the multi-screen long image to be captured; creating a bitmap object and a canvas object corresponding to the bitmap object; sequentially drawing the cached pixel frames of the sub-views into the canvas; generating a screenshot image; and outputting the result to a display screen.
Therefore one of ordinary skill in the art could have combined the elements as claimed by known methods, and that in combination, each element merely performs the same function as it does separately.
The results of the combination would have been predictable and would have resulted in modifying the invention of WU to include a first preset length and a second preset length, wherein the first preset length is a preset length, and the second preset length is a preset height, as disclosed by GONG, whereby the dimensions of the complete long image can be calculated at once, the multi-screen long image can be captured, convenience is provided to the user, and capturing a picture of the screen is fast, as GONG discusses at the abstract and paras. 6 and 8.
Therefore, the claimed subject matter would have been obvious to a person having ordinary skill in the art at the time the invention was made.
Further regarding Claim 1, WU fails to recite a rectangular area centered on the touch coordinate according to the touch coordinate.
However, the prior art of GUO discloses a rectangular area centered on the touch coordinate according to the touch coordinate (please see para. 79, disclosing that after loading the transparent view, users can determine the cropping size and cropping coordinates through touch operations; that is, after loading the transparent view, users can determine the cropping area through touch operations on the transparent view, and when the transparent view receives the cropping area determined by the user's touch operation, it can obtain the cropping size and cropping coordinates of the cropping area through relevant technologies. Figure 3 is a schematic diagram of the interactive interface for determining the cropping area according to an embodiment of the invention: assuming the cropping area is a rectangle, the user can directly determine the diagonal of the cropping area by a two-point touch (this discloses a preset length or width and height of the rectangle being formed), or the user can determine the diagonal of the cropping area by two separate clicks).
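For illustration only, GUO's technique of deriving a rectangle from two touch points taken as opposite corners of its diagonal can be sketched as follows. This is a minimal sketch under that reading of para. 79; the function name and the (x, y, width, height) return convention are hypothetical:

```python
def crop_rect_from_diagonal(p1, p2):
    """Derive the cropping rectangle (x, y, width, height) from two touch
    points taken as opposite corners of the rectangle's diagonal."""
    (x1, y1), (x2, y2) = p1, p2
    # The top-left corner is the componentwise minimum of the two points;
    # the width and height follow from the absolute coordinate differences.
    x, y = min(x1, x2), min(y1, y2)
    return (x, y, abs(x2 - x1), abs(y2 - y1))

# Two diagonal touches, given in either order, yield the same rectangle:
print(crop_rect_from_diagonal((300, 500), (100, 200)))  # (100, 200, 200, 300)
```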
WU teaches a map display method, comprising: determining a touch coordinate of a touch point of a touch action on a target map in a case that the touch action on the target map displayed on a touch display screen is detected; and determining a rectangular area centered on the touch coordinate according to the touch coordinate.
WU teaches displaying an image comprised in the preset region, which may be, for example, a circular region, a square region, or a region of any shape (including rectangular) in the map region centered on the touch point, in a target display area on the touch display screen.
GUO recites a rectangular area centered on the touch coordinate according to the touch coordinate.
WU does not recite a rectangular area centered on the touch coordinate according to the touch coordinate.
Hence the prior art includes each element claimed, although not necessarily in a single prior art reference, with the only difference between the claimed invention and the prior art being the lack of actual combination of the elements in a single prior art reference.
In combination, WU performs the same function as it does separately: displaying a map in which the preset region, which may be, for example, a circular region, a square region, or a region of any shape (including rectangular) in the map region centered on the touch point, is shown in a target display area on the touch display screen.
GUO performs the same function as it does separately: allowing users to determine the cropping area through touch operations on the transparent view, such that when the transparent view receives the cropping area determined by the user's touch operation, it obtains the cropping size and cropping coordinates of the cropping area through relevant technologies. As shown in GUO's Figure 3, a schematic diagram of the interactive interface for determining the cropping area, assuming the cropping area is a rectangle, the user can directly determine the diagonal of the cropping area by a two-point touch.
Therefore one of ordinary skill in the art could have combined the elements as claimed by known methods, and that in combination, each element merely performs the same function as it does separately.
The results of the combination would have been predictable and would have resulted in modifying the invention of WU to include a rectangular area centered on the touch coordinate according to the touch coordinate, as disclosed by GUO, thereby achieving a better user experience; for example, users of instant messaging applications share their chat history with friends by taking screenshots, as GUO discusses at the abstract and para. 4.
Therefore, the claimed subject matter would have been obvious to a person having ordinary skill in the art at the time the invention was made.
Regarding Claim 2, WU discloses wherein the touch action is a tapping operation or a dragging operation (para. 65, disclosing that the second touch event can be a swipe or a drag operation).
Regarding Claim 3, WU discloses that before the step of displaying the image comprised in the rectangular area in the target display area on the touch display screen, the map display method comprises: determining a coordinate of a first pixel point comprised in the rectangular area; and determining the image comprised in the rectangular area according to the coordinate of the first pixel point (figs. 2-3, paras. 51, 53, 55, 61, 66, 68; steps S210-S230 as quoted above with respect to Claim 1; para. 66 disclosing that the region centered on the target touch point (a rectangular area per para. 61) is an image area; and para. 68 disclosing that the image area is part of the scene, which discloses the touch point being a pixel point).
The prior art of GONG discloses the first preset length and the second preset length (please see paras. 6-18, disclosing a method for quickly capturing a multi-screen long image, the method comprising: determining the width (gWidth) and the height (total height) of the multi-screen long image that needs to be captured, creating a bitmap with the size gWidth*total height, then creating, by means of the bitmap, a canvas object associated with the bitmap, then, according to the attribute specification of a view, sequentially drawing sub-views in the created canvas object, and finally storing the drawn bitmap object).
The prior art of GUO discloses a rectangular area centered on the touch coordinate according to the touch coordinate (please see para. 79 and Figure 3, as discussed above with respect to Claim 1).
Regarding Claim 4, GONG discloses that before the step of determining the image comprised in the rectangular area according to the coordinate of the first pixel point, a value of the first preset length, and a value of the second preset length, the map display method further comprises: saving an image presently displayed on the touch display screen in a first view; creating a first bitmap according to a length value and a height value of the first view; and saving pixel information of the first view into the first bitmap; and that the step of determining the image comprised in the rectangular area according to the coordinate of the first pixel point, the first preset length, and the second preset length comprises: cropping the first bitmap according to the coordinate of the first pixel point, the first preset length, and the second preset length, to obtain the image comprised in the rectangular area (please see paras. 6-18, disclosing a method for quickly capturing a multi-screen long image, the method comprising: determining the width (gWidth) and the height (total height) of the multi-screen long image that needs to be captured, creating a bitmap with the size gWidth*total height, then creating, by means of the bitmap, a canvas object associated with the bitmap, then, according to the attribute specification of a view, sequentially drawing sub-views in the created canvas object, and finally storing the drawn bitmap object; further, GONG at the abstract and paras. 6-13 discloses: step one, obtaining the width of the current list view and storing it in the gWidth variable; step two, obtaining the number of all sub-views and beginning processing of each sub-view; step three, obtaining the attribute specification of each sub-view; step four, obtaining the size of each sub-view, as a view object, after its size is determined; and step five, in this way, determining the width gWidth and height total height of the long screenshot, creating a bitmap object, creating the canvas object corresponding to the bitmap object, and drawing the sub-view pixel frame buffers into the canvas).
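For illustration only, GONG's multi-screen capture steps paraphrased above (sum the sub-view heights into a total height, then draw each sub-view's pixel rows into one bitmap-backed canvas in sequence) can be sketched as follows. This is a minimal sketch in which a "bitmap" is modeled as a list of pixel rows; the names are hypothetical and not drawn from GONG:

```python
def capture_long_image(subviews, g_width):
    """Composite sub-views (each a list of pixel rows of width g_width)
    into a single long bitmap: total height is the sum of the sub-view
    heights, and each sub-view's rows are drawn into the canvas in order."""
    total_height = sum(len(view) for view in subviews)
    canvas = []  # stands in for the canvas object associated with the bitmap
    for view in subviews:
        for row in view:
            assert len(row) == g_width  # every row must match the list width
            canvas.append(list(row))
    assert len(canvas) == total_height
    return canvas

# Three "screens" of heights 2, 3, and 1 with width 4 yield a 6-row long image:
views = [[[0] * 4] * 2, [[1] * 4] * 3, [[2] * 4] * 1]
long_image = capture_long_image(views, g_width=4)
print(len(long_image))  # 6
```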
The prior art of GUO discloses a rectangular area centered on the touch coordinate according to the touch coordinate (please see para. 79 and Figure 3, as discussed above with respect to Claim 1; GUO thus discloses a method for acquiring a target bitmap, performed at least by means of a canvas, comprising: capturing the current display page to generate a source bitmap, and cropping the current display page to obtain a target bitmap).
Regarding Claim 5, GONG discloses that the step of saving the pixel information of the first view into the first bitmap comprises: generating a canvas through the first bitmap; and drawing image information in the first view into the first bitmap through the canvas (please see the abstract and paras. 6-13, disclosing: step one, obtaining the width of the current list view and storing it in the gWidth variable; step two, obtaining the number of all sub-views and beginning processing of each sub-view; step three, obtaining the attribute specification of each sub-view; step four, obtaining the size of each sub-view, as a view object, after its size is determined; and step five, in this way, determining the width gWidth and height total height of the long screenshot, creating a bitmap object, creating the canvas object corresponding to the bitmap object, and drawing the sub-view pixel frame buffers into the canvas).
Regarding Claim 6, WU discloses the step of determining the image comprised in the rectangular area according to the coordinate of the first pixel point, the first preset length, and the second preset length, comprising: determining a second view according to the coordinate of the first pixel point, the first preset length, and the second preset length; creating context information of a second bitmap through the second view, wherein a size of the second view is equal to an image size of the second bitmap; and converting the context information of the second bitmap into the image comprised in the rectangular area (figs. 2-3, paras. 51, 53, 55, 61, 66, 68; steps S210-S230 as quoted above with respect to Claim 1; para. 66 disclosing that the region centered on the target touch point (a rectangular area per para. 61) is an image area; and para. 68 disclosing that the image area is part of the scene, which discloses the touch point being a pixel point).
The prior art of GONG discloses the first preset length and the second preset length (please see paras. 6-18, as discussed above with respect to Claim 1).
The prior art of GUO discloses a rectangular area centered on the touch coordinate according to the touch coordinate (please see para. 79 and Figure 3, as discussed above with respect to Claim 1).
Regarding Claim 7, GONG discloses that before the step of converting the context information of the second bitmap into the image comprised in the rectangular area, the map display method further comprises: obtaining image information of the second view; and outputting the image information of the second view to the context information of the second bitmap (please see paras. 6-18, disclosing a method for quickly capturing a multi-screen long image, the method comprising: determining the width (gWidth) and the height (total height) of the multi-screen long image that needs to be captured, creating a bitmap with the size gWidth*total height, then creating, by means of the bitmap, a canvas object associated with the bitmap, then, according to the attribute specification of a view, sequentially drawing sub-views in the created canvas object, and finally storing the drawn bitmap object; further, GONG at the abstract and paras. 6-13 discloses the step-by-step method discussed above with respect to Claim 4).
The prior art of GUO discloses a rectangular area centered on the touch coordinate according to the touch coordinate (please see para. 79 and Figure 3, as discussed above with respect to Claim 1; GUO thus discloses a method for acquiring a target bitmap, performed at least by means of a canvas, comprising: capturing the current display page to generate a source bitmap, and cropping the current display page to obtain a target bitmap).
Regarding Claim 8, WU discloses the step of displaying the image comprised in the rectangular area in the target display area on the touch display screen comprising: enlarging the image comprised in the rectangular area according to a first ratio and displaying an enlarged image in the target display area, or reducing the image comprised in the rectangular area according to a second ratio and displaying a reduced image in the target display area (figs. 2-3, paras. 51, 53, 55, 61, 66, 68, as discussed above with respect to Claim 1; further, para. 69 discloses that if the touch points for click and swipe operations disappear, the coordinates cannot be determined based on the touch points, in which case it can be assumed that the user no longer needs to precisely view a certain coordinate point; in order to reduce the display controls on the interactive interface, avoid the partial magnification image obscuring other interface information, and improve screen utilization, the display of the partial magnification image can be stopped; in this example, display of the zoomed-in view can be stopped by lifting the finger, which is simple, convenient, and improves operating efficiency).
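For illustration only, enlarging or reducing an image region by a ratio before display, as recited in Claim 8, can be sketched with nearest-neighbor rescaling of a 2D pixel grid. This is a minimal sketch; the function name and the choice of nearest-neighbor interpolation are assumptions, not details taken from WU:

```python
def scale_image(pixels, ratio):
    """Nearest-neighbor rescaling of a 2D pixel grid by the given ratio:
    ratio > 1 enlarges the image, ratio < 1 reduces it."""
    src_h, src_w = len(pixels), len(pixels[0])
    dst_h, dst_w = max(1, int(src_h * ratio)), max(1, int(src_w * ratio))
    # Each destination pixel samples the nearest source pixel.
    return [
        [pixels[int(r / ratio)][int(c / ratio)] for c in range(dst_w)]
        for r in range(dst_h)
    ]

small = [[1, 2], [3, 4]]
doubled = scale_image(small, 2.0)
print(doubled)  # [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```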
Regarding Claim 9, WU discloses a storage medium, wherein a computer program is stored in the storage medium, and the computer program is run by a terminal device or a computer to perform the map display method (paras. 90-95, 101-103, disclosing a storage medium, wherein a computer program is stored in the storage medium, and the computer program is run by a terminal device or a computer to perform the map display method).
Regarding Claim 10, WU discloses an electronic apparatus, comprising a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to run the computer program to perform the map display method (paras. 90-95, 101-103, disclosing a storage medium, wherein a computer program is stored in the storage medium, and the computer program is run by a terminal device or a computer to perform the map display method).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Applicant is requested to review the prior art cited on form PTO-892.
The prior art of JEONG, Kyoung Jeon et al. (US 2018/0224298 A1), at paras. 78-223, discloses selecting an arbitrary point on an electronic map, setting waypoints via a controller, and generating a travel route for an unmanned vehicle. A method for setting a target point on an electronic map, the method performed by an electronic apparatus equipped with a display on which the electronic map is displayed and commands are input through touch, may include: sensing a selection of one point on the electronic map; displaying a selection icon on the electronic map; receiving an input of a magnification change command of the electronic map through a first touch; and adjusting the electronic map by expanding or contracting the electronic map with respect to the one point in accordance with the magnification change command and displaying the electronic map on the display, wherein at least the first touch may be continuously maintained during the sensing of the selection of the one point and the receiving of the input of the magnification change command. The method may further include: sensing a movement of a position of the selection icon displayed on the adjusted electronic map; and setting the target point on a basis of the movement of the selection icon. The touch may be continuously maintained during the sensing of the selection of the one point, the receiving of the input of the magnification change command, and the sensing of the movement of the selection icon. The receiving of the input of the magnification change command may include: displaying a magnification change region on the electronic map adjacent to the one point when sensing the selection of the one point; and receiving the magnification change command through the magnification change region.
The receiving of the input of the magnification change command may include: receiving a sliding touch input in which the first touch is moved continuously to the magnification change region while maintaining the touch, and wherein the adjusting of the electronic map may include: expanding the electronic map in response to sensing that the sliding touch input selects an expansion region of the magnification change region; contracting the electronic map in response to sensing that the sliding touch input selects a contraction region of the magnification change region; and stopping the expansion and contraction of the electronic map in response to the sliding touch input moving away from the magnification change region. The adjusting of the electronic map may include, in response to the sliding touch input moving away from the magnification change region, switching the selection icon displayed on the adjusted electronic map to a movement mode, wherein the selection icon may be movable on the electronic map in the movement mode. The setting of the target point may include, in response to releasing the touch input, setting a point at which the selection icon is finally located on the electronic map as the target point. The sensing of the selection of the one point may include, in response to a plurality of touches executed together on the display, a plurality of commands being input together to the electronic apparatus. The sensing of the selection of the one point may include: in response to sensing the first touch input on the display, selecting the detected point as the one point, and the adjusting of the electronic map may include: in response to sensing a second touch input, changing a magnification on the electronic map.
The receiving of the input of the magnification change command may include: displaying a magnification change region adjacent to the selected one point on the electronic map in response to sensing of the selection of the one point, and wherein the adjusting of the electronic map may include: expanding the electronic map in response to sensing the second touch input in an expansion region of the magnification change region; and contracting the electronic map in response to sensing the second touch input in a contraction region of the magnification change region. The adjusting of the electronic map may include, in response to the second touch input not being applied for a predetermined time or more, switching the selection icon displayed on the adjusted electronic map to a movement mode, wherein the selection icon may be movable on the electronic map. The receiving of the input of the magnification change command may include: expanding and contracting the electronic map in response to the selected one point touched on the display being pressed with a pressure equal to or lower than a first reference pressure or equal to or higher than a second reference pressure. The receiving of the input of the magnification change command may include: in response to the touch being maintained at a pressure between the first and second reference pressures for a time longer than a certain reference time, switching the selection icon displayed on the adjusted electronic map to a movement mode, the selection icon being movable on the electronic map in the movement mode. The setting of the target point may include: automatically expanding or contracting the electronic map back to a magnification of the electronic apparatus before the setting of the target point is performed.
A method for setting a target point on an electronic map, the method performed by an electronic apparatus equipped with a display on which the electronic map is displayed and a command is input via a touch, the method may include: sensing a selection and a movement of one point on the electronic map; displaying a first region for receiving a command for moving the one point and for sensing a return from a magnification change command input, and a second region for receiving a command for changing a magnification of the electronic map, the first region and the second region displayed adjacent to each other on the display, on a basis of the position of the detected one point; changing the magnification of the electronic map on the basis of the position of the one point in accordance with the input of the command sensed in the second region, and displaying the electronic map on the display at the changed magnification; and stopping a magnification change when a command input to the second region returns to the first region and is sensed in the first region.
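For convenience of the applicant, the single-touch flow summarized in the JEONG disclosure (select a point, slide into a magnification region to zoom, slide away to enter a movement mode, release to set the target) may be sketched as a minimal state machine. The state names, events, and class below are illustrative assumptions by the examiner, not code from the reference.

```python
# Illustrative state-machine sketch of JEONG's single-touch target-point flow.
# States: idle -> selected -> magnify -> move -> idle (on release).

class TargetPointSetter:
    def __init__(self):
        self.state = "idle"
        self.icon = None      # current position of the selection icon
        self.target = None    # final target point, set on release

    def touch_down(self, point):
        # Sensing a selection of one point displays the selection icon.
        self.state = "selected"
        self.icon = point

    def slide(self, region):
        # Sliding into the expansion/contraction region changes magnification;
        # sliding away from that region stops zooming and enters move mode.
        if self.state in ("selected", "magnify") and region in ("expand", "contract"):
            self.state = "magnify"
        elif self.state == "magnify":
            self.state = "move"

    def move_icon(self, point):
        # In move mode the selection icon is movable on the electronic map.
        if self.state == "move":
            self.icon = point

    def release(self):
        # Releasing the touch sets the icon's final position as the target.
        if self.state == "move":
            self.target = self.icon
        self.state = "idle"
        return self.target
```

Under these assumptions, a drag that zooms, leaves the magnification region, repositions the icon, and then lifts the finger yields the repositioned point as the target.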
The prior art of CHIANG, Chung-Lin (US 20130135216 A1), paras. 19-29, discloses a method for calculating a touch coordinate on a touch panel, the touch panel having a plurality of points, said method comprising: determining a group of candidate points when a touch occurs on the touch panel, each candidate point having one sensing value; assigning weights to the sensing values of the respective candidate points to obtain weighted sensing values; and calculating a coordinate by utilizing the weighted sensing values and positions of the respective candidate points. By using said method, the calculation result of the touch coordinate will be more stable. Because the sensing values of the respective candidate points are weighted, the calculation results of the touch coordinate in multiple calculations will be more converged to the same point; when an object touches the touch panel and presses down for a while, the calculated touch coordinate is less likely to drift all around. The touch panel includes a number of points, and one or more points are touched when a touch occurs on the touch panel; that is, there will be one or more touch points. The reference determines a coordinate of the touch, which can be referred to as a "touch coordinate."
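For convenience of the applicant, the weighted-coordinate calculation described in the CHIANG disclosure may be sketched as follows. The function name and the particular weighting scheme (squaring each sensing value so that stronger readings dominate) are illustrative assumptions by the examiner, not the reference's actual implementation.

```python
# Illustrative sketch of CHIANG's weighted touch-coordinate calculation:
# weight each candidate point's sensing value, then take the weighted
# average of the candidate positions.

def touch_coordinate(candidates):
    """candidates: list of ((x, y), sensing_value) for the candidate points.

    Assigns a weight to each sensing value (here, simply its square) and
    returns the weighted average position as the touch coordinate.
    """
    weighted = [(pos, s * s) for pos, s in candidates]  # assign weights
    total = sum(w for _, w in weighted)
    x = sum(px * w for (px, _), w in weighted) / total
    y = sum(py * w for (_, py), w in weighted) / total
    return (x, y)

# Example: the center point has the strongest sensing value, so the
# calculated coordinate is pulled toward (1, 1) rather than drifting.
print(touch_coordinate([((0, 0), 1), ((1, 1), 3), ((2, 0), 1)]))
```

Because the weighting amplifies the dominant readings, repeated calculations over noisy samples converge toward the same point, which is the stability property the reference emphasizes.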
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PRABODH M DHARIA whose telephone number is (571)272-7668. The examiner can normally be reached Monday - Friday 9:00 AM to 5:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benjamin Lee can be reached on 571-272-2963. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Any response to this action should be mailed to:
Commissioner of Patents and Trademarks
P.O. Box 1450
Alexandria VA 22313-1450
/Prabodh M Dharia/
Primary Examiner
Art Unit 2629
01-07-2026