1.0.0 release

Kuba Gretzky 2020-04-15 10:22:22 +02:00
parent cc8907693e
commit aea4611977
1053 changed files with 427455 additions and 1 deletion

4
.gitignore vendored Normal file

@@ -0,0 +1,4 @@
build/pwndrop
build/pwndrop.exe
build/data/
release/

595
LICENSE Normal file

@@ -0,0 +1,595 @@
GNU General Public License
==========================
_Version 3, 29 June 2007_
_Copyright © 2007 Free Software Foundation, Inc. <http://fsf.org/>_
Everyone is permitted to copy and distribute verbatim copies of this license
document, but changing it is not allowed.
## Preamble
The GNU General Public License is a free, copyleft license for software and other
kinds of works.
The licenses for most software and other practical works are designed to take away
your freedom to share and change the works. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change all versions of a
program--to make sure it remains free software for all its users. We, the Free
Software Foundation, use the GNU General Public License for most of our software; it
applies also to any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not price. Our General
Public Licenses are designed to make sure that you have the freedom to distribute
copies of free software (and charge for them if you wish), that you receive source
code or can get it if you want it, that you can change the software or use pieces of
it in new free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you these rights or
asking you to surrender the rights. Therefore, you have certain responsibilities if
you distribute copies of the software, or if you modify it: responsibilities to
respect the freedom of others.
For example, if you distribute copies of such a program, whether gratis or for a fee,
you must pass on to the recipients the same freedoms that you received. You must make
sure that they, too, receive or can get the source code. And you must show them these
terms so they know their rights.
Developers that use the GNU GPL protect your rights with two steps: **(1)** assert
copyright on the software, and **(2)** offer you this License giving you legal permission
to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains that there is
no warranty for this free software. For both users' and authors' sake, the GPL
requires that modified versions be marked as changed, so that their problems will not
be attributed erroneously to authors of previous versions.
Some devices are designed to deny users access to install or run modified versions of
the software inside them, although the manufacturer can do so. This is fundamentally
incompatible with the aim of protecting users' freedom to change the software. The
systematic pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we have designed
this version of the GPL to prohibit the practice for those products. If such problems
arise substantially in other domains, we stand ready to extend this provision to
those domains in future versions of the GPL, as needed to protect the freedom of
users.
Finally, every program is threatened constantly by software patents. States should
not allow patents to restrict development and use of software on general-purpose
computers, but in those that do, we wish to avoid the special danger that patents
applied to a free program could make it effectively proprietary. To prevent this, the
GPL assures that patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and modification follow.
## TERMS AND CONDITIONS
### 0. Definitions
“This License” refers to version 3 of the GNU General Public License.
“Copyright” also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
“The Program” refers to any copyrightable work licensed under this
License. Each licensee is addressed as “you”. “Licensees” and
“recipients” may be individuals or organizations.
To “modify” a work means to copy from or adapt all or part of the work in
a fashion requiring copyright permission, other than the making of an exact copy. The
resulting work is called a “modified version” of the earlier work or a
work “based on” the earlier work.
A “covered work” means either the unmodified Program or a work based on
the Program.
To “propagate” a work means to do anything with it that, without
permission, would make you directly or secondarily liable for infringement under
applicable copyright law, except executing it on a computer or modifying a private
copy. Propagation includes copying, distribution (with or without modification),
making available to the public, and in some countries other activities as well.
To “convey” a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through a computer
network, with no transfer of a copy, is not conveying.
An interactive user interface displays “Appropriate Legal Notices” to the
extent that it includes a convenient and prominently visible feature that **(1)**
displays an appropriate copyright notice, and **(2)** tells the user that there is no
warranty for the work (except to the extent that warranties are provided), that
licensees may convey the work under this License, and how to view a copy of this
License. If the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
### 1. Source Code
The “source code” for a work means the preferred form of the work for
making modifications to it. “Object code” means any non-source form of a
work.
A “Standard Interface” means an interface that either is an official
standard defined by a recognized standards body, or, in the case of interfaces
specified for a particular programming language, one that is widely used among
developers working in that language.
The “System Libraries” of an executable work include anything, other than
the work as a whole, that **(a)** is included in the normal form of packaging a Major
Component, but which is not part of that Major Component, and **(b)** serves only to
enable use of the work with that Major Component, or to implement a Standard
Interface for which an implementation is available to the public in source code form.
A “Major Component”, in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system (if any) on which
the executable work runs, or a compiler used to produce the work, or an object code
interpreter used to run it.
The “Corresponding Source” for a work in object code form means all the
source code needed to generate, install, and (for an executable work) run the object
code and to modify the work, including scripts to control those activities. However,
it does not include the work's System Libraries, or general-purpose tools or
generally available free programs which are used unmodified in performing those
activities but which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for the work, and
the source code for shared libraries and dynamically linked subprograms that the work
is specifically designed to require, such as by intimate data communication or
control flow between those subprograms and other parts of the work.
The Corresponding Source need not include anything that users can regenerate
automatically from other parts of the Corresponding Source.
The Corresponding Source for a work in source code form is that same work.
### 2. Basic Permissions
All rights granted under this License are granted for the term of copyright on the
Program, and are irrevocable provided the stated conditions are met. This License
explicitly affirms your unlimited permission to run the unmodified Program. The
output from running a covered work is covered by this License only if the output,
given its content, constitutes a covered work. This License acknowledges your rights
of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not convey, without
conditions so long as your license otherwise remains in force. You may convey covered
works to others for the sole purpose of having them make modifications exclusively
for you, or provide you with facilities for running those works, provided that you
comply with the terms of this License in conveying all material for which you do not
control copyright. Those thus making or running the covered works for you must do so
exclusively on your behalf, under your direction and control, on terms that prohibit
them from making any copies of your copyrighted material outside their relationship
with you.
Conveying under any other circumstances is permitted solely under the conditions
stated below. Sublicensing is not allowed; section 10 makes it unnecessary.
### 3. Protecting Users' Legal Rights From Anti-Circumvention Law
No covered work shall be deemed part of an effective technological measure under any
applicable law fulfilling obligations under article 11 of the WIPO copyright treaty
adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention
of such measures.
When you convey a covered work, you waive any legal power to forbid circumvention of
technological measures to the extent such circumvention is effected by exercising
rights under this License with respect to the covered work, and you disclaim any
intention to limit operation or modification of the work as a means of enforcing,
against the work's users, your or third parties' legal rights to forbid circumvention
of technological measures.
### 4. Conveying Verbatim Copies
You may convey verbatim copies of the Program's source code as you receive it, in any
medium, provided that you conspicuously and appropriately publish on each copy an
appropriate copyright notice; keep intact all notices stating that this License and
any non-permissive terms added in accord with section 7 apply to the code; keep
intact all notices of the absence of any warranty; and give all recipients a copy of
this License along with the Program.
You may charge any price or no price for each copy that you convey, and you may offer
support or warranty protection for a fee.
### 5. Conveying Modified Source Versions
You may convey a work based on the Program, or the modifications to produce it from
the Program, in the form of source code under the terms of section 4, provided that
you also meet all of these conditions:
* **a)** The work must carry prominent notices stating that you modified it, and giving a
relevant date.
* **b)** The work must carry prominent notices stating that it is released under this
License and any conditions added under section 7. This requirement modifies the
requirement in section 4 to “keep intact all notices”.
* **c)** You must license the entire work, as a whole, under this License to anyone who
comes into possession of a copy. This License will therefore apply, along with any
applicable section 7 additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no permission to license the
work in any other way, but it does not invalidate such permission if you have
separately received it.
* **d)** If the work has interactive user interfaces, each must display Appropriate Legal
Notices; however, if the Program has interactive interfaces that do not display
Appropriate Legal Notices, your work need not make them do so.
A compilation of a covered work with other separate and independent works, which are
not by their nature extensions of the covered work, and which are not combined with
it such as to form a larger program, in or on a volume of a storage or distribution
medium, is called an “aggregate” if the compilation and its resulting
copyright are not used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work in an aggregate
does not cause this License to apply to the other parts of the aggregate.
### 6. Conveying Non-Source Forms
You may convey a covered work in object code form under the terms of sections 4 and
5, provided that you also convey the machine-readable Corresponding Source under the
terms of this License, in one of these ways:
* **a)** Convey the object code in, or embodied in, a physical product (including a
physical distribution medium), accompanied by the Corresponding Source fixed on a
durable physical medium customarily used for software interchange.
* **b)** Convey the object code in, or embodied in, a physical product (including a
physical distribution medium), accompanied by a written offer, valid for at least
three years and valid for as long as you offer spare parts or customer support for
that product model, to give anyone who possesses the object code either **(1)** a copy of
the Corresponding Source for all the software in the product that is covered by this
License, on a durable physical medium customarily used for software interchange, for
a price no more than your reasonable cost of physically performing this conveying of
source, or **(2)** access to copy the Corresponding Source from a network server at no
charge.
* **c)** Convey individual copies of the object code with a copy of the written offer to
provide the Corresponding Source. This alternative is allowed only occasionally and
noncommercially, and only if you received the object code with such an offer, in
accord with subsection 6b.
* **d)** Convey the object code by offering access from a designated place (gratis or for
a charge), and offer equivalent access to the Corresponding Source in the same way
through the same place at no further charge. You need not require recipients to copy
the Corresponding Source along with the object code. If the place to copy the object
code is a network server, the Corresponding Source may be on a different server
(operated by you or a third party) that supports equivalent copying facilities,
provided you maintain clear directions next to the object code saying where to find
the Corresponding Source. Regardless of what server hosts the Corresponding Source,
you remain obligated to ensure that it is available for as long as needed to satisfy
these requirements.
* **e)** Convey the object code using peer-to-peer transmission, provided you inform
other peers where the object code and Corresponding Source of the work are being
offered to the general public at no charge under subsection 6d.
A separable portion of the object code, whose source code is excluded from the
Corresponding Source as a System Library, need not be included in conveying the
object code work.
A “User Product” is either **(1)** a “consumer product”, which
means any tangible personal property which is normally used for personal, family, or
household purposes, or **(2)** anything designed or sold for incorporation into a
dwelling. In determining whether a product is a consumer product, doubtful cases
shall be resolved in favor of coverage. For a particular product received by a
particular user, “normally used” refers to a typical or common use of
that class of product, regardless of the status of the particular user or of the way
in which the particular user actually uses, or expects or is expected to use, the
product. A product is a consumer product regardless of whether the product has
substantial commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
“Installation Information” for a User Product means any methods,
procedures, authorization keys, or other information required to install and execute
modified versions of a covered work in that User Product from a modified version of
its Corresponding Source. The information must suffice to ensure that the continued
functioning of the modified object code is in no case prevented or interfered with
solely because modification has been made.
If you convey an object code work under this section in, or with, or specifically for
use in, a User Product, and the conveying occurs as part of a transaction in which
the right of possession and use of the User Product is transferred to the recipient
in perpetuity or for a fixed term (regardless of how the transaction is
characterized), the Corresponding Source conveyed under this section must be
accompanied by the Installation Information. But this requirement does not apply if
neither you nor any third party retains the ability to install modified object code
on the User Product (for example, the work has been installed in ROM).
The requirement to provide Installation Information does not include a requirement to
continue to provide support service, warranty, or updates for a work that has been
modified or installed by the recipient, or for the User Product in which it has been
modified or installed. Access to a network may be denied when the modification itself
materially and adversely affects the operation of the network or violates the rules
and protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided, in accord with
this section must be in a format that is publicly documented (and with an
implementation available to the public in source code form), and must require no
special password or key for unpacking, reading or copying.
### 7. Additional Terms
“Additional permissions” are terms that supplement the terms of this
License by making exceptions from one or more of its conditions. Additional
permissions that are applicable to the entire Program shall be treated as though they
were included in this License, to the extent that they are valid under applicable
law. If additional permissions apply only to part of the Program, that part may be
used separately under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option remove any
additional permissions from that copy, or from any part of it. (Additional
permissions may be written to require their own removal in certain cases when you
modify the work.) You may place additional permissions on material, added by you to a
covered work, for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you add to a
covered work, you may (if authorized by the copyright holders of that material)
supplement the terms of this License with terms:
* **a)** Disclaiming warranty or limiting liability differently from the terms of
sections 15 and 16 of this License; or
* **b)** Requiring preservation of specified reasonable legal notices or author
attributions in that material or in the Appropriate Legal Notices displayed by works
containing it; or
* **c)** Prohibiting misrepresentation of the origin of that material, or requiring that
modified versions of such material be marked in reasonable ways as different from the
original version; or
* **d)** Limiting the use for publicity purposes of names of licensors or authors of the
material; or
* **e)** Declining to grant rights under trademark law for use of some trade names,
trademarks, or service marks; or
* **f)** Requiring indemnification of licensors and authors of that material by anyone
who conveys the material (or modified versions of it) with contractual assumptions of
liability to the recipient, for any liability that these contractual assumptions
directly impose on those licensors and authors.
All other non-permissive additional terms are considered “further
restrictions” within the meaning of section 10. If the Program as you received
it, or any part of it, contains a notice stating that it is governed by this License
along with a term that is a further restriction, you may remove that term. If a
license document contains a further restriction but permits relicensing or conveying
under this License, you may add to a covered work material governed by the terms of
that license document, provided that the further restriction does not survive such
relicensing or conveying.
If you add terms to a covered work in accord with this section, you must place, in
the relevant source files, a statement of the additional terms that apply to those
files, or a notice indicating where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the form of a
separately written license, or stated as exceptions; the above requirements apply
either way.
### 8. Termination
You may not propagate or modify a covered work except as expressly provided under
this License. Any attempt otherwise to propagate or modify it is void, and will
automatically terminate your rights under this License (including any patent licenses
granted under the third paragraph of section 11).
However, if you cease all violation of this License, then your license from a
particular copyright holder is reinstated **(a)** provisionally, unless and until the
copyright holder explicitly and finally terminates your license, and **(b)** permanently,
if the copyright holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently
if the copyright holder notifies you of the violation by some reasonable means, this
is the first time you have received notice of violation of this License (for any
work) from that copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the licenses of
parties who have received copies or rights from you under this License. If your
rights have been terminated and not permanently reinstated, you do not qualify to
receive new licenses for the same material under section 10.
### 9. Acceptance Not Required for Having Copies
You are not required to accept this License in order to receive or run a copy of the
Program. Ancillary propagation of a covered work occurring solely as a consequence of
using peer-to-peer transmission to receive a copy likewise does not require
acceptance. However, nothing other than this License grants you permission to
propagate or modify any covered work. These actions infringe copyright if you do not
accept this License. Therefore, by modifying or propagating a covered work, you
indicate your acceptance of this License to do so.
### 10. Automatic Licensing of Downstream Recipients
Each time you convey a covered work, the recipient automatically receives a license
from the original licensors, to run, modify and propagate that work, subject to this
License. You are not responsible for enforcing compliance by third parties with this
License.
An “entity transaction” is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an organization, or
merging organizations. If propagation of a covered work results from an entity
transaction, each party to that transaction who receives a copy of the work also
receives whatever licenses to the work the party's predecessor in interest had or
could give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if the predecessor
has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the rights granted or
affirmed under this License. For example, you may not impose a license fee, royalty,
or other charge for exercise of rights granted under this License, and you may not
initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging
that any patent claim is infringed by making, using, selling, offering for sale, or
importing the Program or any portion of it.
### 11. Patents
A “contributor” is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The work thus
licensed is called the contributor's “contributor version”.
A contributor's “essential patent claims” are all patent claims owned or
controlled by the contributor, whether already acquired or hereafter acquired, that
would be infringed by some manner, permitted by this License, of making, using, or
selling its contributor version, but do not include claims that would be infringed
only as a consequence of further modification of the contributor version. For
purposes of this definition, “control” includes the right to grant patent
sublicenses in a manner consistent with the requirements of this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free patent license
under the contributor's essential patent claims, to make, use, sell, offer for sale,
import and otherwise run, modify and propagate the contents of its contributor
version.
In the following three paragraphs, a “patent license” is any express
agreement or commitment, however denominated, not to enforce a patent (such as an
express permission to practice a patent or covenant not to sue for patent
infringement). To “grant” such a patent license to a party means to make
such an agreement or commitment not to enforce a patent against the party.
If you convey a covered work, knowingly relying on a patent license, and the
Corresponding Source of the work is not available for anyone to copy, free of charge
and under the terms of this License, through a publicly available network server or
other readily accessible means, then you must either **(1)** cause the Corresponding
Source to be so available, or **(2)** arrange to deprive yourself of the benefit of the
patent license for this particular work, or **(3)** arrange, in a manner consistent with
the requirements of this License, to extend the patent license to downstream
recipients. “Knowingly relying” means you have actual knowledge that, but
for the patent license, your conveying the covered work in a country, or your
recipient's use of the covered work in a country, would infringe one or more
identifiable patents in that country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or arrangement, you
convey, or propagate by procuring conveyance of, a covered work, and grant a patent
license to some of the parties receiving the covered work authorizing them to use,
propagate, modify or convey a specific copy of the covered work, then the patent
license you grant is automatically extended to all recipients of the covered work and
works based on it.
A patent license is “discriminatory” if it does not include within the
scope of its coverage, prohibits the exercise of, or is conditioned on the
non-exercise of one or more of the rights that are specifically granted under this
License. You may not convey a covered work if you are a party to an arrangement with
a third party that is in the business of distributing software, under which you make
payment to the third party based on the extent of your activity of conveying the
work, and under which the third party grants, to any of the parties who would receive
the covered work from you, a discriminatory patent license **(a)** in connection with
copies of the covered work conveyed by you (or copies made from those copies), or **(b)**
primarily for and in connection with specific products or compilations that contain
the covered work, unless you entered into that arrangement, or that patent license
was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting any implied
license or other defenses to infringement that may otherwise be available to you
under applicable patent law.
### 12. No Surrender of Others' Freedom
If conditions are imposed on you (whether by court order, agreement or otherwise)
that contradict the conditions of this License, they do not excuse you from the
conditions of this License. If you cannot convey a covered work so as to satisfy
simultaneously your obligations under this License and any other pertinent
obligations, then as a consequence you may not convey it at all. For example, if you
agree to terms that obligate you to collect a royalty for further conveying from
those to whom you convey the Program, the only way you could satisfy both those terms
and this License would be to refrain entirely from conveying the Program.
### 13. Use with the GNU Affero General Public License
Notwithstanding any other provision of this License, you have permission to link or
combine any covered work with a work licensed under version 3 of the GNU Affero
General Public License into a single combined work, and to convey the resulting work.
The terms of this License will continue to apply to the part which is the covered
work, but the special requirements of the GNU Affero General Public License, section
13, concerning interaction through a network will apply to the combination as such.
### 14. Revised Versions of this License
The Free Software Foundation may publish revised and/or new versions of the GNU
General Public License from time to time. Such new versions will be similar in spirit
to the present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Program specifies that
a certain numbered version of the GNU General Public License “or any later
version” applies to it, you have the option of following the terms and
conditions either of that numbered version or of any later version published by the
Free Software Foundation. If the Program does not specify a version number of the GNU
General Public License, you may choose any version ever published by the Free
Software Foundation.
If the Program specifies that a proxy can decide which future versions of the GNU
General Public License can be used, that proxy's public statement of acceptance of a
version permanently authorizes you to choose that version for the Program.
Later license versions may give you additional or different permissions. However, no
additional obligations are imposed on any author or copyright holder as a result of
your choosing to follow a later version.
### 15. Disclaimer of Warranty
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE
QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE
DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
### 16. Limitation of Liability
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY
COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS
PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL,
INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE
OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE
WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
### 17. Interpretation of Sections 15 and 16
If the disclaimer of warranty and limitation of liability provided above cannot be
given local legal effect according to their terms, reviewing courts shall apply local
law that most closely approximates an absolute waiver of all civil liability in
connection with the Program, unless a warranty or assumption of liability accompanies
a copy of the Program in return for a fee.
_END OF TERMS AND CONDITIONS_
## How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest possible use to
the public, the best way to achieve this is to make it free software which everyone
can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest to attach them
to the start of each source file to most effectively state the exclusion of warranty;
and each file should have at least the “copyright” line and a pointer to
where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short notice like this
when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type 'show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type 'show c' for details.
The hypothetical commands `show w` and `show c` should show the appropriate parts of
the General Public License. Of course, your program's commands might be different;
for a GUI interface, you would use an “about box”.
You should also get your employer (if you work as a programmer) or school, if any, to
sign a “copyright disclaimer” for the program, if necessary. For more
information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may consider it
more useful to permit linking proprietary applications with the library. If this is
what you want to do, use the GNU Lesser General Public License instead of this
License. But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.

24
Makefile Normal file

@@ -0,0 +1,24 @@
TARGET=pwndrop
BUILD_DIR=./build
.PHONY: all
all: build
build:
	@echo "*** building..."
	@GOARCH=amd64 go build -ldflags="-s -w" -o $(BUILD_DIR)/$(TARGET) -mod=vendor main.go
	@rm -rf $(BUILD_DIR)/admin
	@echo "*** copying new admin panel"
	@cp -r ./www $(BUILD_DIR)/admin
	@chmod 700 $(BUILD_DIR)/pwndrop
clean:
	@go clean
	@rm -rf ./build/
install:
	@echo "*** stopping pwndrop"
	@$(BUILD_DIR)/$(TARGET) stop
	@echo "*** installing and starting pwndrop"
	@$(BUILD_DIR)/$(TARGET) install && $(BUILD_DIR)/$(TARGET) start
	@$(BUILD_DIR)/$(TARGET) status

154
README.md

@@ -1 +1,153 @@
Coming soon!
<p align="center">
<img alt="pwndrop logo" src="https://raw.githubusercontent.com/kgretzky/pwndrop/master/media/pwndrop-logo-512.png" height="120" />
<p align="center">
<img alt="pwndrop title" src="https://raw.githubusercontent.com/kgretzky/pwndrop/master/media/pwndrop-title-black-512.png" height="40" />
</p>
</p>
**pwndrop** is a self-deployable file hosting service for sending out red teaming payloads or securely sharing your private files over HTTP and WebDAV.
If you've ever needed to quickly set up an nginx/apache web server to host your files and you were never happy with the limitations of `python -m SimpleHTTPServer`, **pwndrop** is definitely for you!
<p align="center">
<img alt="demo" src="https://raw.githubusercontent.com/kgretzky/pwndrop/master/media/demo1.gif" height="500" />
</p>
With **pwndrop** you can:
- [x] Upload and immediately share multiple files using your own private VPS, using drag & drop.
- [x] Decide to make files available or unavailable for download with a single click.
- [x] Set up custom download URLs for shared files without playing with the directory structure.
- [x] Set up facade files, which will be served instead of the original file whenever you feel like it.
- [x] Set up automatic redirects to spoof the file's extension in a shared link.
- [x] Change the MIME type of the served file to alter the browser's behavior when a download link is clicked.
- [x] Serve files over HTTP, HTTPS and WebDAV.
- [x] Install and set up everything using a bash one-liner.
- [x] Set up **pwndrop** to work as a nameserver and respond with a valid DNS A record to any sub-domain you choose.
- [x] Protect your admin panel behind a custom secret URL path and log in securely with your own username and password.
- [x] Never worry about setting up HTTPS certificates as **pwndrop** does everything for you in the background (including auto-renewals).
Its main goal is to make file sharing as easy and intuitive as possible, while implementing extra features to aid in red team assessments.
The frontend of **pwndrop** is developed in pure Vue.js + Bootstrap, with no npm or webpack dependencies. The backend, written in Go, serves the REST API and manages a local database.
## Write-up
If you want to learn how to use **pwndrop** or see what new features have been added in recent releases, make sure to check out the posts on my blog:
https://breakdev.org/pwndrop
## Prerequisites
If you don't yet have a server to deploy to, I highly recommend DigitalOcean. The cheapest $5/mo Debian 9 server with 25 GB of storage space will work wonders for you. You can use my referral link to [get an extra $100 to spend on your servers in 60 days for free](https://m.do.co/c/50338abc7ffe).
Register a new domain and point its DNS A records to your VPS IP. You can also register a domain and point its `ns1` and `ns2` nameservers to the **pwndrop** instance IP - it will automatically respond with valid DNS A replies.
1. A registered domain name pointing to the **pwndrop** instance IP, either via DNS A records or by using it as a nameserver.
2. A server with at least 512 MB of RAM.
If you want to set up **pwndrop** without a domain, see below for how to run a local instance, which will not auto-generate HTTPS certificates.
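Before running the installer, you can verify that the DNS records have propagated, for example with `dig` (the domain below is a placeholder):
```
dig +short A yourdomain.com
# should print the IP of your pwndrop server
```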
## Installation
Make sure there aren't any DNS or HTTP(S) servers running before you attempt to install **pwndrop**.
#### Oneliner
I do not recommend running one-liners before downloading and checking the script code, but if you are really in a hurry, here it is:
```
curl https://github.com/kgretzky/pwndrop/install_linux.sh | sudo bash
```
This will download the latest amd64 release binary and fully install a daemon running in the background.
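If you prefer to follow the advice above and review the script before executing it, an equivalent step-by-step version might look like this:
```
curl -o install_linux.sh https://github.com/kgretzky/pwndrop/install_linux.sh
less install_linux.sh    # review what the script does
sudo bash install_linux.sh
```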
#### From binary
First you need to download the release package you want from: https://github.com/kgretzky/pwndrop/releases
Then do the following (this performs the same actions as the one-liner):
```
tar zxvf pwndrop-linux-amd64.tar.gz
./pwndrop stop
./pwndrop install
./pwndrop start
./pwndrop status
```
#### From source code
First of all, make sure you have Go installed, version **1.13** or newer: https://golang.org/doc/install
Then do the following:
```
git clone https://github.com/kgretzky/pwndrop
cd pwndrop
make
make install
```
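The Makefile builds an amd64 binary. If you need a different target architecture, you can run the same build command by hand and override `GOARCH` (a sketch based on the Makefile's `build` target; `arm64` is just an example):
```
GOARCH=arm64 go build -ldflags="-s -w" -o build/pwndrop -mod=vendor main.go
cp -r ./www ./build/admin
```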
## Quickstart
Make sure **pwndrop** is running.
1. Open the secret URL to authorize your browser: `https://yourdomain.com/pwndrop` (this is the default value; make sure to use the secret path you've pre-configured)
2. Open the admin panel URL in your browser: `https://yourdomain.com/` (since you've authorized your browser, you will now see the admin panel login page)
3. Create your admin account or log in.
4. Click the configuration cog in the top-left corner and make sure you change the secret path to something other than `/pwndrop`.
You're good to go!
## Running from CLI
You don't have to install **pwndrop** as a daemon; you can run it straight from the console.
```
usage: pwndrop [start|stop|install|remove|status] [-config <config_path>] [-debug] [-no-autocert] [-no-dns] [-h]
daemon management:
start : start the daemon
stop : stop the daemon
install : install the daemon using the available system manager (systemd, systemv and upstart supported)
remove : uninstall the daemon
status : check status of the installed daemon
parameters:
-config : specify a custom path to a config file (def. 'pwndrop.ini' in same directory as the executable)
-debug : enable debug output
-no-autocert : disable automatic TLS certificate retrieval from LetsEncrypt; useful when you want to connect over IP or/and in a local network
-no-dns : do not run a DNS server on port 53 UDP; use this if you don't want to use pwndrop as a nameserver
-h : usage help
```
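For example, to run a pre-built binary in the foreground with debug output, without automatic certificate retrieval and without the DNS server (handy for a local test instance), you could use something like:
```
sudo ./pwndrop -config ./pwndrop.ini -debug -no-autocert -no-dns
```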
## Configuration
On first launch, **pwndrop** will by default create a new configuration file `pwndrop.ini` in the same directory as the executable. You can later modify it or supply your own, for example to pre-configure **pwndrop** before installation and automate the deployment even further.
Here is an example config file with all available config variables, annotated with comments:
```
[pwndrop]
listen_ip = "190.33.86.22" # the external IP of your pwndrop instance (must be set if you want to use the nameserver feature)
http_port = 80 # listening port for HTTP and WebDAV
https_port = 443 # listening port for HTTPS
data_dir = "./data" # directory path where data storage will reside (relative paths are resolved from the executable's directory)
admin_dir = "./admin" # directory path where the admin panel files reside (relative paths are resolved from the executable's directory)
[setup] # optional: include this section if you want to pre-configure pwndrop (it will be deleted from the config file on first run)
username = "admin" # username of the admin account
password = "secretpassword" # password of the admin account
redirect_url = "https://www.somedomain.com" # URL to which visitors will be redirected if they supply a path that doesn't point to any shared file (leave blank if you want to return 404)
secret_path = "/pwndrop" # secret URL path which, when visited, will allow your browser to access the login page of the admin panel (make sure to change the default value)
```
If you want to pre-configure your **pwndrop** instance before deployment using any of the installation scripts, put your configuration file at `/usr/local/pwndrop/pwndrop.ini` and it will be parsed the moment the **pwndrop** daemon is first executed.
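For example, a pre-seeded deployment might look like this (assuming you have prepared a `pwndrop.ini` like the one above):
```
sudo mkdir -p /usr/local/pwndrop
sudo cp pwndrop.ini /usr/local/pwndrop/pwndrop.ini
curl https://github.com/kgretzky/pwndrop/install_linux.sh | sudo bash
```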
## Credits
Huge thanks to [**@jaredhaight**](https://twitter.com/jaredhaight) for inspiring me to learn Vue, with his [Faction C2](https://www.factionc2.com/) framework!
Many thanks also to all the people who gave me pre-release feedback and supported me with their opinions on the tool!
## License
**pwndrop** is made by Kuba Gretzky ([@mrgretzky](https://twitter.com/mrgretzky)) and it's released under the GPL3 license.

53
api/api.go Normal file

@@ -0,0 +1,53 @@
package api
import (
"encoding/json"
"io"
"mime/multipart"
"net/http"
"os"
"github.com/kgretzky/pwndrop/config"
)
type ApiResponse struct {
ErrorCode int `json:"error_code"`
Message string `json:"message"`
Data interface{} `json:"data,omitempty"`
}
var Cfg *config.Config = nil
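// SaveUploadedFile writes the contents of an uploaded multipart file to save_path,
// creating the destination file with 0644 permissions and overwriting any existing content.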
func SaveUploadedFile(file multipart.File, fhead *multipart.FileHeader, save_path string) error {
f, err := os.OpenFile(save_path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0644)
if err != nil {
return err
}
defer f.Close()
_, err = io.Copy(f, file)
if err != nil {
return err
}
return nil
}
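// DumpResponse serializes an ApiResponse to JSON and writes it to the client with
// the given HTTP status code and application-level error code.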
func DumpResponse(w http.ResponseWriter, message string, http_status int, error_code int, o interface{}) {
resp := &ApiResponse{
ErrorCode: error_code,
Message: message,
Data: o,
}
d, err := json.Marshal(resp)
if err != nil {
http.Error(w, "corrupted response", http.StatusInternalServerError)
return
}
// headers must be set before writing the status line, otherwise they are ignored
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http_status)
w.Write(d)
}
func SetConfig(cfg *config.Config) {
Cfg = cfg
}

238
api/auth.go Normal file

@@ -0,0 +1,238 @@
package api
import (
"encoding/json"
"fmt"
"golang.org/x/crypto/bcrypt"
"net/http"
"time"
//"github.com/gorilla/mux"
"github.com/kgretzky/pwndrop/log"
"github.com/kgretzky/pwndrop/storage"
"github.com/kgretzky/pwndrop/utils"
)
const AUTH_COOKIE_NAME = "t"
const AUTH_SESSION_TIMEOUT_SECS = 24 * 60 * 60
func AuthOptionsHandler(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Access-Control-Allow-Methods", "GET,POST,PUT,PATCH,DELETE,OPTIONS")
}
func AuthCheckHandler(w http.ResponseWriter, r *http.Request) {
type AuthResponse struct {
Status int `json:"status"`
}
users, err := storage.UserList()
if err != nil {
DumpResponse(w, err.Error(), http.StatusInternalServerError, API_ERROR_FILE_DATABASE_FAILED, nil)
return
}
resp := &AuthResponse{}
if len(users) == 0 {
resp.Status = 0
DumpResponse(w, "ok", http.StatusOK, 0, resp)
return
}
_, err = AuthSession(r)
if err != nil {
DumpResponse(w, err.Error(), http.StatusUnauthorized, API_ERROR_BAD_AUTHENTICATION, nil)
return
}
resp.Status = 1
DumpResponse(w, "ok", http.StatusOK, 0, resp)
}
func LoginUserHandler(w http.ResponseWriter, r *http.Request) {
type LoginRequest struct {
Username string `json:"username"`
Password string `json:"password"`
}
type LoginResponse struct {
Username string `json:"username"`
Token string `json:"token"`
}
j := LoginRequest{}
err := json.NewDecoder(r.Body).Decode(&j)
if err != nil {
DumpResponse(w, err.Error(), http.StatusBadRequest, API_ERROR_BAD_REQUEST, nil)
return
}
log.Debug("username: %s", j.Username)
o, err := storage.UserGetByName(j.Username)
if err != nil {
DumpResponse(w, err.Error(), http.StatusUnauthorized, API_ERROR_BAD_AUTHENTICATION, nil)
return
}
err = bcrypt.CompareHashAndPassword([]byte(o.Password), []byte(j.Password))
if err != nil {
DumpResponse(w, err.Error(), http.StatusUnauthorized, API_ERROR_BAD_AUTHENTICATION, nil)
return
}
token := utils.GenRandomHash()
s := &storage.DbSession{
Uid: o.ID,
Token: token,
CreateTime: time.Now().Unix(),
}
_, err = storage.SessionCreate(s)
if err != nil {
DumpResponse(w, err.Error(), http.StatusInternalServerError, API_ERROR_FILE_DATABASE_FAILED, nil)
return
}
resp := &LoginResponse{
Username: o.Name,
Token: token,
}
ck := &http.Cookie{
Domain: "",
Path: "/",
MaxAge: AUTH_SESSION_TIMEOUT_SECS,
HttpOnly: true,
Name: AUTH_COOKIE_NAME,
Value: token,
}
http.SetCookie(w, ck)
DumpResponse(w, "ok", http.StatusOK, 0, resp)
}
func LogoutUserHandler(w http.ResponseWriter, r *http.Request) {
ck, err := r.Cookie(AUTH_COOKIE_NAME)
if err != nil {
DumpResponse(w, err.Error(), http.StatusUnauthorized, API_ERROR_BAD_AUTHENTICATION, nil)
return
}
token := ck.Value
s, err := storage.SessionGetByToken(token)
if err != nil {
DumpResponse(w, err.Error(), http.StatusInternalServerError, API_ERROR_FILE_DATABASE_FAILED, nil)
return
}
err = storage.SessionDelete(s.ID)
if err != nil {
DumpResponse(w, err.Error(), http.StatusInternalServerError, API_ERROR_FILE_DATABASE_FAILED, nil)
return
}
deleteCookie(AUTH_COOKIE_NAME, w)
DumpResponse(w, "ok", http.StatusOK, 0, nil)
}
func ClearSecretSessionHandler(w http.ResponseWriter, r *http.Request) {
cookie_name := Cfg.GetCookieName()
deleteCookie(cookie_name, w)
DumpResponse(w, "ok", http.StatusOK, 0, nil)
}
func CreateUserHandler(w http.ResponseWriter, r *http.Request) {
type CreateUserRequest struct {
Username string `json:"username"`
Password string `json:"password"`
}
type CreateUserResponse struct {
Username string `json:"username"`
}
users, err := storage.UserList()
if err != nil {
DumpResponse(w, err.Error(), http.StatusInternalServerError, API_ERROR_FILE_DATABASE_FAILED, nil)
return
}
_, err = AuthSession(r)
if len(users) > 0 && err != nil {
DumpResponse(w, err.Error(), http.StatusUnauthorized, API_ERROR_BAD_AUTHENTICATION, nil)
return
}
j := CreateUserRequest{}
err = json.NewDecoder(r.Body).Decode(&j)
if err != nil {
DumpResponse(w, err.Error(), http.StatusBadRequest, API_ERROR_BAD_REQUEST, nil)
return
}
if j.Username == "" || j.Password == "" {
DumpResponse(w, "bad request", http.StatusBadRequest, API_ERROR_BAD_REQUEST, nil)
return
}
_, err = storage.UserGetByName(j.Username)
if err == nil {
DumpResponse(w, "user already exists", http.StatusOK, API_ERROR_USER_ALREADY_EXISTS, nil)
return
}
phash, err := bcrypt.GenerateFromPassword([]byte(j.Password), 10)
if err != nil {
DumpResponse(w, err.Error(), http.StatusBadRequest, API_ERROR_FILE_DATABASE_FAILED, nil)
return
}
o := &storage.DbUser{
Name: j.Username,
Password: string(phash),
}
_, err = storage.UserCreate(o)
if err != nil {
DumpResponse(w, err.Error(), http.StatusBadRequest, API_ERROR_FILE_DATABASE_FAILED, nil)
return
}
resp := &CreateUserResponse{
Username: j.Username,
}
DumpResponse(w, "ok", http.StatusOK, 0, resp)
}
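// AuthSession validates the session cookie of an incoming request and returns the ID
// of the logged-in user. It fails if the cookie is missing, the token is unknown, or the
// session is older than AUTH_SESSION_TIMEOUT_SECS (expired sessions are deleted from storage).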
func AuthSession(r *http.Request) (int, error) {
ck, err := r.Cookie(AUTH_COOKIE_NAME)
if err != nil {
return -1, err
}
token := ck.Value
s, err := storage.SessionGetByToken(token)
if err != nil {
return -1, err
}
if time.Now().After(time.Unix(s.CreateTime, 0).Add(AUTH_SESSION_TIMEOUT_SECS * time.Second)) {
storage.SessionDelete(s.ID)
return -1, fmt.Errorf("session token expired")
}
return s.Uid, nil
}
func deleteCookie(name string, w http.ResponseWriter) {
ck := &http.Cookie{
Domain: "",
Path: "/",
MaxAge: -1,
HttpOnly: true,
Name: name,
Value: "",
}
http.SetCookie(w, ck)
}

71
api/config.go Normal file

@@ -0,0 +1,71 @@
package api
import (
"encoding/json"
"net/http"
"github.com/kgretzky/pwndrop/storage"
"github.com/kgretzky/pwndrop/utils"
)
func ConfigOptionsHandler(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Access-Control-Allow-Methods", "GET,POST,PUT,PATCH,DELETE,OPTIONS")
}
func ConfigGetHandler(w http.ResponseWriter, r *http.Request) {
// #### CHECK IF AUTHENTICATED ####
_, err := AuthSession(r)
if err != nil {
DumpResponse(w, "unauthorized", http.StatusUnauthorized, API_ERROR_BAD_AUTHENTICATION, nil)
return
}
o, err := storage.ConfigGet(1)
if err != nil {
DumpResponse(w, err.Error(), http.StatusInternalServerError, API_ERROR_FILE_DATABASE_FAILED, nil)
return
}
DumpResponse(w, "ok", http.StatusOK, 0, o)
}
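// ConfigUpdateHandler replaces the stored configuration with the JSON body of the request.
// If the secret path changes, a new cookie name and token are generated, which invalidates
// previously authorized browsers.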
func ConfigUpdateHandler(w http.ResponseWriter, r *http.Request) {
// #### CHECK IF AUTHENTICATED ####
_, err := AuthSession(r)
if err != nil {
DumpResponse(w, "unauthorized", http.StatusUnauthorized, API_ERROR_BAD_AUTHENTICATION, nil)
return
}
old_cfg, err := storage.ConfigGet(1)
if err != nil {
DumpResponse(w, err.Error(), http.StatusInternalServerError, API_ERROR_FILE_DATABASE_FAILED, nil)
return
}
o := storage.DbConfig{}
err = json.NewDecoder(r.Body).Decode(&o)
if err != nil {
DumpResponse(w, err.Error(), http.StatusBadRequest, API_ERROR_BAD_REQUEST, nil)
return
}
if o.SecretPath == "" || o.CookieName == "" || o.CookieToken == "" {
DumpResponse(w, "missing config variables", http.StatusBadRequest, API_ERROR_BAD_REQUEST, nil)
return
}
if o.SecretPath[0] != '/' {
o.SecretPath = "/" + o.SecretPath
}
if o.SecretPath != old_cfg.SecretPath {
o.CookieName = utils.GenRandomString(4)
o.CookieToken = utils.GenRandomHash()
}
ret, err := storage.ConfigUpdate(1, &o)
if err != nil {
DumpResponse(w, err.Error(), http.StatusInternalServerError, API_ERROR_FILE_DATABASE_FAILED, nil)
return
}
DumpResponse(w, "ok", http.StatusOK, 0, ret)
}

15
api/const.go Normal file

@@ -0,0 +1,15 @@
package api
const (
API_ERROR_SUCCESS = iota
API_ERROR_BAD_REQUEST
API_ERROR_FILE_SAVE_FAILED
API_ERROR_FILE_NOT_FOUND
API_ERROR_FILE_DATABASE_FAILED
API_ERROR_BAD_AUTHENTICATION
API_ERROR_PASSWORDS_DONT_MATCH
API_ERROR_USER_ALREADY_EXISTS
API_ERROR_SERVER_INFO_FAILED
)

291
api/files.go Normal file

@@ -0,0 +1,291 @@
package api
import (
"encoding/json"
"net/http"
"os"
"path/filepath"
"strconv"
"time"
"github.com/gorilla/mux"
"github.com/kgretzky/pwndrop/log"
"github.com/kgretzky/pwndrop/storage"
"github.com/kgretzky/pwndrop/utils"
)
func FileOptionsHandler(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Access-Control-Allow-Methods", "GET,POST,PUT,PATCH,DELETE,OPTIONS")
}
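// FileCreateHandler handles an authenticated multipart file upload. The uploaded file is
// stored under a random filename inside <data_dir>/files, registered in the database with
// its original name, size and MIME type, and exposed under a randomly generated URL path.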
func FileCreateHandler(w http.ResponseWriter, r *http.Request) {
// #### CHECK IF AUTHENTICATED ####
_, err := AuthSession(r)
if err != nil {
DumpResponse(w, "unauthorized", http.StatusUnauthorized, API_ERROR_BAD_AUTHENTICATION, nil)
return
}
data_dir := Cfg.GetDataDir()
user_id := 1
file, fhead, err := r.FormFile("file")
if err != nil {
DumpResponse(w, err.Error(), http.StatusBadRequest, API_ERROR_BAD_REQUEST, nil)
return
}
defer file.Close()
name := fhead.Filename
fname := utils.GenRandomHash()
url_path := "/" + utils.GenRandomString(8) + "/" + name // TODO: make sure the generated folder is unique
mime_type := fhead.Header.Get("content-type") //r.Header.Get("content-type")
if mime_type == "" {
mime_type = "application/octet-stream"
}
log.Debug("upload: " + mime_type)
os.Mkdir(filepath.Join(data_dir, "files"), 0700)
save_path := filepath.Join(data_dir, "files", fname)
if err := SaveUploadedFile(file, fhead, save_path); err != nil {
DumpResponse(w, err.Error(), http.StatusInternalServerError, API_ERROR_FILE_SAVE_FAILED, nil)
return
}
var fi os.FileInfo
if fi, err = os.Stat(save_path); err != nil {
os.Remove(save_path)
DumpResponse(w, err.Error(), http.StatusInternalServerError, API_ERROR_FILE_NOT_FOUND, nil)
return
}
o := &storage.DbFile{
Uid: user_id,
Name: name,
Filename: fname,
FileSize: fi.Size(),
UrlPath: url_path,
RedirectPath: "",
MimeType: mime_type,
SubMimeType: mime_type,
OrigMimeType: mime_type,
CreateTime: time.Now().Unix(),
IsEnabled: true,
IsPaused: false,
RefSubFile: 0,
}
f, err := storage.FileCreate(o)
if err != nil {
os.Remove(save_path)
DumpResponse(w, err.Error(), http.StatusInternalServerError, API_ERROR_FILE_DATABASE_FAILED, nil)
return
}
DumpResponse(w, "ok", http.StatusOK, 0, f)
}
func FileListHandler(w http.ResponseWriter, r *http.Request) {
// #### CHECK IF AUTHENTICATED ####
_, err := AuthSession(r)
if err != nil {
DumpResponse(w, "unauthorized", http.StatusUnauthorized, API_ERROR_BAD_AUTHENTICATION, nil)
return
}
files, err := storage.FileList()
if err != nil {
DumpResponse(w, err.Error(), http.StatusInternalServerError, API_ERROR_FILE_DATABASE_FAILED, nil)
return
}
type JsonFile struct {
storage.DbFile
SubFile *storage.DbSubFile `json:"sub_file"`
}
type Response struct {
Uploads []*JsonFile `json:"uploads"`
}
resp := &Response{}
for _, f := range files {
jo := &JsonFile{
DbFile: f,
}
if f.RefSubFile > 0 {
subf, err := storage.SubFileGet(f.RefSubFile)
if err == nil {
jo.SubFile = subf
}
}
resp.Uploads = append(resp.Uploads, jo)
}
DumpResponse(w, "ok", http.StatusOK, 0, resp)
}
func FileDeleteHandler(w http.ResponseWriter, r *http.Request) {
// #### CHECK IF AUTHENTICATED ####
_, err := AuthSession(r)
if err != nil {
DumpResponse(w, "unauthorized", http.StatusUnauthorized, API_ERROR_BAD_AUTHENTICATION, nil)
return
}
data_dir := Cfg.GetDataDir()
vars := mux.Vars(r)
id, err := strconv.Atoi(vars["id"])
if err != nil {
DumpResponse(w, err.Error(), http.StatusBadRequest, API_ERROR_BAD_REQUEST, nil)
return
}
f, err := storage.FileGet(id)
if err != nil {
DumpResponse(w, err.Error(), http.StatusNotFound, API_ERROR_FILE_NOT_FOUND, nil)
return
}
if f.RefSubFile > 0 {
err = DeleteSubFile(f.RefSubFile)
if err != nil {
DumpResponse(w, err.Error(), http.StatusInternalServerError, API_ERROR_FILE_DATABASE_FAILED, nil)
return
}
}
err = storage.FileDelete(id)
if err != nil {
DumpResponse(w, err.Error(), http.StatusInternalServerError, API_ERROR_FILE_DATABASE_FAILED, nil)
return
}
save_path := filepath.Join(data_dir, "files", f.Filename)
os.Remove(save_path)
DumpResponse(w, "ok", http.StatusOK, 0, nil)
}
func FileUpdateHandler(w http.ResponseWriter, r *http.Request) {
// #### CHECK IF AUTHENTICATED ####
_, err := AuthSession(r)
if err != nil {
DumpResponse(w, "unauthorized", http.StatusUnauthorized, API_ERROR_BAD_AUTHENTICATION, nil)
return
}
vars := mux.Vars(r)
id, err := strconv.Atoi(vars["id"])
if err != nil {
DumpResponse(w, err.Error(), http.StatusBadRequest, API_ERROR_BAD_REQUEST, nil)
return
}
file := storage.DbFile{}
err = json.NewDecoder(r.Body).Decode(&file)
if err != nil {
DumpResponse(w, err.Error(), http.StatusBadRequest, API_ERROR_BAD_REQUEST, nil)
return
}
if len(file.UrlPath) == 0 || file.UrlPath[0] != '/' { // guard against empty paths to avoid an index panic
file.UrlPath = "/" + file.UrlPath
}
if len(file.RedirectPath) > 0 && file.RedirectPath[0] != '/' {
file.RedirectPath = "/" + file.RedirectPath
}
f, err := storage.FileUpdate(id, &file)
if err != nil {
DumpResponse(w, err.Error(), http.StatusInternalServerError, API_ERROR_FILE_DATABASE_FAILED, nil)
return
}
DumpResponse(w, "ok", http.StatusOK, 0, f)
}
func FileEnableHandler(w http.ResponseWriter, r *http.Request) {
// #### CHECK IF AUTHENTICATED ####
_, err := AuthSession(r)
if err != nil {
DumpResponse(w, "unauthorized", http.StatusUnauthorized, API_ERROR_BAD_AUTHENTICATION, nil)
return
}
vars := mux.Vars(r)
id, err := strconv.Atoi(vars["id"])
if err != nil {
DumpResponse(w, err.Error(), http.StatusBadRequest, API_ERROR_BAD_REQUEST, nil)
return
}
f, err := storage.FileEnable(id, true)
if err != nil {
DumpResponse(w, err.Error(), http.StatusInternalServerError, API_ERROR_FILE_DATABASE_FAILED, nil)
return
}
DumpResponse(w, "ok", http.StatusOK, 0, f)
}
func FileDisableHandler(w http.ResponseWriter, r *http.Request) {
// #### CHECK IF AUTHENTICATED ####
_, err := AuthSession(r)
if err != nil {
DumpResponse(w, "unauthorized", http.StatusUnauthorized, API_ERROR_BAD_AUTHENTICATION, nil)
return
}
vars := mux.Vars(r)
id, err := strconv.Atoi(vars["id"])
if err != nil {
DumpResponse(w, err.Error(), http.StatusBadRequest, API_ERROR_BAD_REQUEST, nil)
return
}
f, err := storage.FileEnable(id, false)
if err != nil {
DumpResponse(w, err.Error(), http.StatusInternalServerError, API_ERROR_FILE_DATABASE_FAILED, nil)
return
}
log.Debug("file %d is_enabled: %v", id, f.IsEnabled)
DumpResponse(w, "ok", http.StatusOK, 0, f)
}
func FilePauseHandler(w http.ResponseWriter, r *http.Request) {
// #### CHECK IF AUTHENTICATED ####
_, err := AuthSession(r)
if err != nil {
DumpResponse(w, "unauthorized", http.StatusUnauthorized, API_ERROR_BAD_AUTHENTICATION, nil)
return
}
vars := mux.Vars(r)
id, err := strconv.Atoi(vars["id"])
if err != nil {
DumpResponse(w, err.Error(), http.StatusBadRequest, API_ERROR_BAD_REQUEST, nil)
return
}
f, err := storage.FilePause(id, true)
if err != nil {
DumpResponse(w, err.Error(), http.StatusInternalServerError, API_ERROR_FILE_DATABASE_FAILED, nil)
return
}
DumpResponse(w, "ok", http.StatusOK, 0, f)
}
func FileUnpauseHandler(w http.ResponseWriter, r *http.Request) {
// #### CHECK IF AUTHENTICATED ####
_, err := AuthSession(r)
if err != nil {
DumpResponse(w, "unauthorized", http.StatusUnauthorized, API_ERROR_BAD_AUTHENTICATION, nil)
return
}
vars := mux.Vars(r)
id, err := strconv.Atoi(vars["id"])
if err != nil {
DumpResponse(w, err.Error(), http.StatusBadRequest, API_ERROR_BAD_REQUEST, nil)
return
}
f, err := storage.FilePause(id, false)
if err != nil {
DumpResponse(w, err.Error(), http.StatusInternalServerError, API_ERROR_FILE_DATABASE_FAILED, nil)
return
}
DumpResponse(w, "ok", http.StatusOK, 0, f)
}

api/server_info.go Normal file
@@ -0,0 +1,37 @@
package api
import (
"net/http"
"github.com/shirou/gopsutil/disk"
)
func ServerInfoOptionsHandler(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Access-Control-Allow-Methods", "GET,POST,PUT,PATCH,DELETE,OPTIONS")
}
func ServerInfoGetHandler(w http.ResponseWriter, r *http.Request) {
type ServerInfoResponse struct {
DiskUsed uint64 `json:"disk_used"`
DiskFree uint64 `json:"disk_free"`
}
// #### CHECK IF AUTHENTICATED ####
_, err := AuthSession(r)
if err != nil {
DumpResponse(w, "unauthorized", http.StatusUnauthorized, API_ERROR_BAD_AUTHENTICATION, nil)
return
}
dstat, err := disk.Usage("/")
if err != nil {
DumpResponse(w, err.Error(), http.StatusInternalServerError, API_ERROR_SERVER_INFO_FAILED, nil)
return
}
resp := &ServerInfoResponse{}
resp.DiskUsed = dstat.Used
resp.DiskFree = dstat.Free
DumpResponse(w, "ok", http.StatusOK, 0, resp)
}

api/subfiles.go Normal file
@@ -0,0 +1,146 @@
package api
import (
"net/http"
"os"
"path/filepath"
"strconv"
"time"
"github.com/gorilla/mux"
"github.com/kgretzky/pwndrop/log"
"github.com/kgretzky/pwndrop/storage"
"github.com/kgretzky/pwndrop/utils"
)
func SubFileCreateHandler(w http.ResponseWriter, r *http.Request) {
// #### CHECK IF AUTHENTICATED ####
_, err := AuthSession(r)
if err != nil {
DumpResponse(w, "unauthorized", http.StatusUnauthorized, API_ERROR_BAD_AUTHENTICATION, nil)
return
}
vars := mux.Vars(r)
data_dir := Cfg.GetDataDir()
user_id := 1
file, fhead, err := r.FormFile("file")
if err != nil {
DumpResponse(w, err.Error(), http.StatusBadRequest, API_ERROR_BAD_REQUEST, nil)
return
}
defer file.Close()
fid, err := strconv.Atoi(vars["id"])
if err != nil {
DumpResponse(w, err.Error(), http.StatusBadRequest, API_ERROR_BAD_REQUEST, nil)
return
}
parent_file, err := storage.FileGet(fid)
if err != nil {
DumpResponse(w, err.Error(), http.StatusBadRequest, API_ERROR_FILE_NOT_FOUND, nil)
return
}
name := fhead.Filename
fname := utils.GenRandomHash()
os.Mkdir(filepath.Join(data_dir, "files"), 0700)
save_path := filepath.Join(data_dir, "files", fname)
if err := SaveUploadedFile(file, fhead, save_path); err != nil {
DumpResponse(w, err.Error(), http.StatusInternalServerError, API_ERROR_FILE_SAVE_FAILED, nil)
return
}
var fi os.FileInfo
if fi, err = os.Stat(save_path); err != nil {
os.Remove(save_path)
DumpResponse(w, err.Error(), http.StatusInternalServerError, API_ERROR_FILE_NOT_FOUND, nil)
return
}
o := &storage.DbSubFile{
Fid: fid,
Uid: user_id,
Name: name,
Filename: fname,
FileSize: fi.Size(),
CreateTime: time.Now().Unix(),
}
f, err := storage.SubFileCreate(o)
if err != nil {
os.Remove(save_path)
DumpResponse(w, err.Error(), http.StatusInternalServerError, API_ERROR_FILE_DATABASE_FAILED, nil)
return
}
parent_file.SubName = f.Name
parent_file.RefSubFile = f.ID
log.Debug("ref_sub_file: %d", parent_file.RefSubFile)
_, err = storage.FileUpdate(fid, parent_file)
if err != nil {
DumpResponse(w, err.Error(), http.StatusInternalServerError, API_ERROR_FILE_SAVE_FAILED, nil)
return
}
DumpResponse(w, "ok", http.StatusOK, 0, f)
}
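// SubFileDeleteHandler removes a facade (sub) file and unpauses every file that referenced it.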
func SubFileDeleteHandler(w http.ResponseWriter, r *http.Request) {
// #### CHECK IF AUTHENTICATED ####
_, err := AuthSession(r)
if err != nil {
DumpResponse(w, "unauthorized", http.StatusUnauthorized, API_ERROR_BAD_AUTHENTICATION, nil)
return
}
vars := mux.Vars(r)
sub_id, err := strconv.Atoi(vars["sub_id"])
if err != nil {
DumpResponse(w, err.Error(), http.StatusBadRequest, API_ERROR_BAD_REQUEST, nil)
return
}
err = DeleteSubFile(sub_id)
if err != nil {
DumpResponse(w, err.Error(), http.StatusInternalServerError, API_ERROR_FILE_DATABASE_FAILED, nil)
return
}
files, err := storage.FileList()
if err == nil {
for _, f := range files {
if f.RefSubFile == sub_id {
storage.FilePause(f.ID, false)
}
}
}
DumpResponse(w, "ok", http.StatusOK, 0, nil)
}
func DeleteSubFile(sub_id int) error {
data_dir := Cfg.GetDataDir()
f, err := storage.SubFileGet(sub_id)
if err != nil {
return err
}
_, err = storage.FileResetSubFile(f.Fid)
if err != nil {
return err
}
err = storage.SubFileDelete(sub_id)
if err != nil {
return err
}
save_path := filepath.Join(data_dir, "files", f.Filename)
os.Remove(save_path)
return nil
}

api/version.go Normal file
@@ -0,0 +1,22 @@
package api
import (
"net/http"
"github.com/kgretzky/pwndrop/config"
)
func VersionOptionsHandler(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Access-Control-Allow-Methods", "GET,POST,PUT,PATCH,DELETE,OPTIONS")
}
func VersionGetHandler(w http.ResponseWriter, r *http.Request) {
type VersionResponse struct {
Version string `json:"version"`
}
resp := &VersionResponse{}
resp.Version = config.Version
DumpResponse(w, "ok", http.StatusOK, 0, resp)
}

build.sh Normal file
@@ -0,0 +1,4 @@
#!/bin/bash
echo Building...
GOARCH=amd64 go build -ldflags="-s -w" -o ./build/pwndrop -mod=vendor main.go
echo Done.

config/banner.go Normal file
@@ -0,0 +1,3 @@
package config
const Version = "1.0.0"

config/config.go Normal file
@@ -0,0 +1,241 @@
package config
import (
"fmt"
"golang.org/x/crypto/bcrypt"
"gopkg.in/ini.v1"
"path/filepath"
"strconv"
"github.com/kgretzky/pwndrop/log"
"github.com/kgretzky/pwndrop/storage"
"github.com/kgretzky/pwndrop/utils"
)
const (
INI_SERVER = "pwndrop"
INI_VAR_LISTEN_IP = "listen_ip"
INI_VAR_HTTP_PORT = "http_port"
INI_VAR_HTTPS_PORT = "https_port"
INI_VAR_DATA_DIR = "data_dir"
INI_VAR_ADMIN_DIR = "admin_dir"
INI_SETUP = "setup"
INI_SETUP_USERNAME = "username"
INI_SETUP_PASSWORD = "password"
INI_SETUP_REDIRECT_URL = "redirect_url"
INI_SETUP_SECRET_PATH = "secret_path"
)
type Config struct {
ini *ini.File
path string
exec_dir string
}
func NewConfig(path string) (*Config, error) {
var err error
c := &Config{
path: path,
exec_dir: utils.GetExecDir(),
}
data_dir := filepath.Join(c.exec_dir, "data")
admin_dir := filepath.Join(c.exec_dir, "admin")
defs := map[string]string{
INI_VAR_LISTEN_IP: "",
INI_VAR_HTTP_PORT: "80",
INI_VAR_HTTPS_PORT: "443",
INI_VAR_DATA_DIR: data_dir,
INI_VAR_ADMIN_DIR: admin_dir,
}
c.ini, err = ini.Load(path)
if err != nil {
log.Warning("config file not found at path: %s", path)
c.ini = ini.Empty()
}
if _, err = c.ini.GetSection(INI_SERVER); err != nil {
c.ini.NewSection(INI_SERVER)
}
for k, v := range defs {
if _, err = c.ini.Section(INI_SERVER).GetKey(k); err != nil {
c.ini.Section(INI_SERVER).NewKey(k, v)
}
}
return c, nil
}
func (c *Config) HandleSetup() error {
if _, err := c.ini.GetSection(INI_SETUP); err == nil {
o, err := storage.ConfigGet(1)
if err != nil {
log.Error("config: can't get config from db")
return err
}
var username, password, redirect_url, secret_path string
if k, err := c.ini.Section(INI_SETUP).GetKey(INI_SETUP_USERNAME); err == nil {
username = k.String()
}
if k, err := c.ini.Section(INI_SETUP).GetKey(INI_SETUP_PASSWORD); err == nil {
password = k.String()
}
if k, err := c.ini.Section(INI_SETUP).GetKey(INI_SETUP_REDIRECT_URL); err == nil {
redirect_url = k.String()
o.RedirectUrl = redirect_url
log.Important("setup: redirect url set to: %s", redirect_url)
}
if k, err := c.ini.Section(INI_SETUP).GetKey(INI_SETUP_SECRET_PATH); err == nil {
secret_path = k.String()
if len(secret_path) > 0 && secret_path[0] != '/' {
secret_path = "/" + secret_path
}
if len(secret_path) >= 2 {
o.CookieName = utils.GenRandomString(4)
o.CookieToken = utils.GenRandomHash()
o.SecretPath = secret_path
log.Important("setup: secret path set to: %s", secret_path)
}
}
if len(username) > 0 && len(password) > 0 {
phash, err := bcrypt.GenerateFromPassword([]byte(password), 10)
if err == nil {
o := &storage.DbUser{
Name: username,
Password: string(phash),
}
storage.UserDelete(1)
_, err = storage.UserCreate(o)
if err == nil {
log.Important("setup: created user account: %s", username)
} else {
log.Error("setup: failed to create user account: %s", err)
}
err = storage.SessionDeleteAll()
if err != nil {
log.Error("failed to delete active sessions: %s", err)
}
}
}
_, err = storage.ConfigUpdate(1, o)
if err != nil {
log.Error("config: can't save config to db")
}
c.ini.DeleteSection(INI_SETUP)
}
return nil
}
func (c *Config) Save() error {
err := c.ini.SaveTo(c.path)
if err != nil {
return fmt.Errorf("failed to save config file")
}
return nil
}
func (c *Config) GetListenIP() string {
s, _ := c.Get(INI_VAR_LISTEN_IP)
return s
}
func (c *Config) GetHttpPort() int {
s, _ := c.Get(INI_VAR_HTTP_PORT)
port, _ := strconv.Atoi(s)
return port
}
func (c *Config) GetHttpsPort() int {
s, _ := c.Get(INI_VAR_HTTPS_PORT)
port, _ := strconv.Atoi(s)
return port
}
func (c *Config) GetSecretPath() string {
o, err := storage.ConfigGet(1)
if err != nil {
return ""
}
return o.SecretPath
}
func (c *Config) GetDataDir() string {
dir, _ := c.Get(INI_VAR_DATA_DIR)
return c.joinPath(c.exec_dir, dir)
}
func (c *Config) GetAdminDir() string {
dir, _ := c.Get(INI_VAR_ADMIN_DIR)
return c.joinPath(c.exec_dir, dir)
}
func (c *Config) GetCookieName() string {
o, err := storage.ConfigGet(1)
if err != nil {
return ""
}
return o.CookieName
}
func (c *Config) GetCookieToken() string {
o, err := storage.ConfigGet(1)
if err != nil {
return ""
}
return o.CookieToken
}
func (c *Config) GetRedirectUrl() string {
o, err := storage.ConfigGet(1)
if err != nil {
return ""
}
return o.RedirectUrl
}
func (c *Config) Get(key string) (string, error) {
section, err := c.ini.GetSection(INI_SERVER)
if err != nil {
return "", err
}
if section.HasKey(key) {
return section.Key(key).String(), nil
}
return "", fmt.Errorf("config key '%s' not found", key)
}
func (c *Config) Set(key string, value string) error {
section, err := c.ini.GetSection(INI_SERVER)
if err != nil {
return err
}
if section.HasKey(key) {
section.Key(key).SetValue(value)
} else {
section.NewKey(key, value)
}
err = c.ini.SaveTo(c.path)
if err != nil {
return err
}
return nil
}
func (c *Config) joinPath(base_path string, rel_path string) string {
var ret string
if filepath.IsAbs(rel_path) {
ret = rel_path
} else {
ret = filepath.Join(base_path, rel_path)
}
return ret
}
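For reference, a minimal usage sketch of the Config helpers above (the ini path and the new port value are illustrative assumptions, not part of this commit):

package main

import (
	"fmt"

	"github.com/kgretzky/pwndrop/config"
)

func main() {
	// NewConfig loads the ini file or falls back to built-in defaults when it is missing.
	cfg, err := config.NewConfig("/usr/local/pwndrop/pwndrop.ini") // path is an assumption
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg.GetHttpPort(), cfg.GetHttpsPort()) // 80 and 443 unless overridden in the ini file
	// Set writes the key back to the [pwndrop] section and saves the file.
	if err := cfg.Set(config.INI_VAR_HTTP_PORT, "8080"); err != nil {
		panic(err)
	}
}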

core/certdb.go Normal file
@@ -0,0 +1,32 @@
package core
import (
"context"
"path/filepath"
"golang.org/x/crypto/acme/autocert"
)
type CertDb struct {
AutocertMgr autocert.Manager
}
func NewCertDb(cache_dir string) (*CertDb, error) {
cdb := &CertDb{
AutocertMgr: autocert.Manager{
Prompt: autocert.AcceptTOS,
Cache: autocert.DirCache(filepath.Join(cache_dir, "autocert")),
},
}
cdb.AutocertMgr.HostPolicy = cdb.hostPolicy
return cdb, nil
}
func (cdb *CertDb) SetManagedHostnames(hosts ...string) {
cdb.AutocertMgr.HostPolicy = autocert.HostWhitelist(hosts...)
}
func (cdb *CertDb) hostPolicy(ctx context.Context, host string) error {
// accept all hosts for TLS certificate retrieval
return nil
}

core/gen_cert.go Normal file
@@ -0,0 +1,79 @@
package core
import (
"crypto/rand"
"crypto/rsa"
"crypto/tls"
"crypto/x509"
"crypto/x509/pkix"
"fmt"
"io/ioutil"
"math/big"
"time"
"github.com/kgretzky/pwndrop/utils"
)
func GenerateTLSCertificate(common string) (*tls.Certificate, error) {
private_key, err := rsa.GenerateKey(rand.Reader, 2048)
if err != nil {
return nil, err
}
notBefore := time.Now()
validFor := time.Duration(10*365*24) * time.Hour // certificate valid for ten years
notAfter := notBefore.Add(validFor)
serialNumberLimit := new(big.Int).Lsh(big.NewInt(1), 128)
serialNumber, err := rand.Int(rand.Reader, serialNumberLimit)
if err != nil {
return nil, err
}
if common == "" {
common = utils.GenRandomString(8)
}
template := x509.Certificate{
SerialNumber: serialNumber,
Subject: pkix.Name{
Country: []string{},
Locality: []string{},
Organization: []string{},
OrganizationalUnit: []string{},
CommonName: common,
},
NotBefore: notBefore,
NotAfter: notAfter,
KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth, x509.ExtKeyUsageServerAuth},
BasicConstraintsValid: true,
IsCA: false,
}
cert, err := x509.CreateCertificate(rand.Reader, &template, &template, &private_key.PublicKey, private_key)
if err != nil {
return nil, err
}
ret_tls := &tls.Certificate{
Certificate: [][]byte{cert},
PrivateKey: private_key,
}
return ret_tls, nil
}
func LoadTLSCertificate(pub_path string, pkey_path string) (*tls.Certificate, error) {
pkey, err := ioutil.ReadFile(pkey_path)
if err != nil {
return nil, fmt.Errorf("TLS private key not found at: %s", pkey_path)
}
pubkey, err := ioutil.ReadFile(pub_path)
if err != nil {
return nil, fmt.Errorf("TLS public key not found at: %s", pub_path)
}
cert, err := tls.X509KeyPair(pubkey, pkey)
if err != nil {
return nil, err
}
return &cert, nil
}
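A minimal sketch of how the two helpers above are meant to be combined, mirroring the fallback logic in core/server.go further down; the data directory and host name are assumptions:

package main

import (
	"crypto/tls"
	"path/filepath"

	"github.com/kgretzky/pwndrop/core"
)

// serverCertificate loads a certificate from dataDir and falls back to a
// self-signed one, the same strategy core/server.go uses.
func serverCertificate(dataDir, host string) (*tls.Config, error) {
	cert, err := core.LoadTLSCertificate(filepath.Join(dataDir, "public.crt"), filepath.Join(dataDir, "private.key"))
	if err != nil {
		// no certificate on disk - generate a throwaway self-signed one
		if cert, err = core.GenerateTLSCertificate(host); err != nil {
			return nil, err
		}
	}
	return &tls.Config{Certificates: []tls.Certificate{*cert}}, nil
}

func main() {
	cfg, err := serverCertificate("./data", "example.com") // both arguments are illustrative
	if err != nil {
		panic(err)
	}
	_ = cfg
}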

core/globals.go Normal file
@@ -0,0 +1,7 @@
package core
import (
"github.com/kgretzky/pwndrop/config"
)
var Cfg *config.Config

core/http.go Normal file
@@ -0,0 +1,56 @@
package core
import (
"io"
"net/http"
"os"
"path/filepath"
"github.com/kgretzky/pwndrop/log"
)
type Http struct {
srv *Server
}
func NewHttp(srv *Server) (*Http, error) {
s := &Http{
srv: srv,
}
return s, nil
}
func (s *Http) ServeHTTP(w http.ResponseWriter, r *http.Request) {
data_dir := Cfg.GetDataDir()
if r.Method == "GET" {
f, status, err := s.srv.GetFile(r.URL.Path)
if err != nil {
w.WriteHeader(status)
log.Error("http: get: %s: %s", r.URL.Path, err)
return
}
if f.RedirectPath != "" && f.RedirectPath != r.URL.Path && !f.IsPaused {
http.Redirect(w, r, f.RedirectPath, http.StatusFound)
} else {
mime_type := f.MimeType
if f.IsPaused {
mime_type = f.SubMimeType
}
fpath := filepath.Join(data_dir, "files", f.Filename)
fo, err := os.Open(fpath)
//data, err := ioutil.ReadFile(fpath)
if err != nil {
log.Error("http: file: %s: %s", f.Filename, err)
return
}
defer fo.Close()
w.Header().Set("Content-Type", mime_type)
w.WriteHeader(200)
io.Copy(w, fo)
}
return
}
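// anything other than GET falls through to a plain 404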
w.WriteHeader(404)
}

core/nameserver.go Normal file
@@ -0,0 +1,89 @@
package core
import (
"fmt"
"net"
"strconv"
"time"
"github.com/kgretzky/pwndrop/log"
"github.com/miekg/dns"
)
type Nameserver struct {
srv *dns.Server
serial uint32
}
func NewNameserver(ch_exit *chan bool) (*Nameserver, error) {
n := &Nameserver{
serial: uint32(time.Now().Unix()),
}
listen_ip := Cfg.GetListenIP()
dns_host := fmt.Sprintf("%s:%d", listen_ip, 53)
dns.HandleFunc(".", n.handleRequest)
log.Info("starting DNS server at UDP: %s", dns_host)
go func() {
n.srv = &dns.Server{Addr: dns_host, Net: "udp"}
if err := n.srv.ListenAndServe(); err != nil {
log.Fatal("failed to start nameserver at: %s", dns_host)
*ch_exit <- false
}
}()
return n, nil
}
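// handleRequest answers every query with the configured listen IP: A records resolve to listen_ip
// and NS queries return ns1/ns2 under the queried domain.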
func (n *Nameserver) handleRequest(w dns.ResponseWriter, r *dns.Msg) {
m := new(dns.Msg)
m.SetReply(r)
qdomain := m.Question[0].Name
listen_ip := Cfg.GetListenIP()
log.Debug("dns: %s listen_ip: %s", qdomain, listen_ip)
if Cfg.GetListenIP() == "" {
return
}
soa := &dns.SOA{
Hdr: dns.RR_Header{Name: qdomain, Rrtype: dns.TypeSOA, Class: dns.ClassINET, Ttl: 300},
Ns: "ns1." + qdomain,
Mbox: "hostmaster." + qdomain,
Serial: n.serial,
Refresh: 900,
Retry: 900,
Expire: 1800,
Minttl: 60,
}
m.Ns = []dns.RR{soa}
log.Debug("qtype: %d", r.Question[0].Qtype)
switch r.Question[0].Qtype {
case dns.TypeA:
log.Debug("DNS A: " + qdomain + " = " + listen_ip)
rr := &dns.A{
Hdr: dns.RR_Header{Name: qdomain, Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 300},
A: net.ParseIP(listen_ip),
}
m.Answer = append(m.Answer, rr)
case dns.TypeNS:
log.Debug("DNS NS: " + qdomain)
for _, i := range []int{1, 2} {
rr := &dns.NS{
Hdr: dns.RR_Header{Name: qdomain, Rrtype: dns.TypeNS, Class: dns.ClassINET, Ttl: 300},
Ns: "ns" + strconv.Itoa(i) + "." + qdomain,
}
m.Answer = append(m.Answer, rr)
}
}
w.WriteMsg(m)
}
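// pdom returns the fully-qualified form of a domain name (with a trailing dot).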
func pdom(domain string) string {
return domain + "."
}

core/server.go Normal file
@@ -0,0 +1,261 @@
package core
import (
"crypto/tls"
"fmt"
"net"
"net/http"
"path/filepath"
"strings"
"time"
"github.com/gorilla/mux"
"golang.org/x/crypto/acme"
"github.com/kgretzky/pwndrop/api"
"github.com/kgretzky/pwndrop/log"
"github.com/kgretzky/pwndrop/storage"
)
const (
API_PATH = "api/v1"
)
type Server struct {
srv *http.Server
listenTLS net.Listener
listen net.Listener
wdav *WebDav
http *Http
cdb *CertDb
ns *Nameserver
r *mux.Router
}
func NewServer(host string, port_plain int, port_tls int, enable_letsencrypt bool, enable_dns bool, ch_exit *chan bool) (*Server, error) {
var err error
s := &Server{}
hostname := fmt.Sprintf("%s:%d", host, port_plain)
hostname_tls := fmt.Sprintf("%s:%d", host, port_tls)
s.cdb, err = NewCertDb(Cfg.GetDataDir())
if err != nil {
return nil, err
}
cert, err := LoadTLSCertificate(filepath.Join(Cfg.GetDataDir(), "public.crt"), filepath.Join(Cfg.GetDataDir(), "private.key"))
if err != nil {
log.Warning("certificate: %s", err)
cert, err = GenerateTLSCertificate(host)
if err != nil {
return nil, err
}
log.Info("generated self-signed certificate")
} else {
log.Info("using TLS certificate from data directory")
enable_letsencrypt = false
}
tls_cfg := &tls.Config{}
tls_cfg.Certificates = append(tls_cfg.Certificates, *cert)
if enable_letsencrypt {
log.Info("autocert: enabled")
tls_cfg.GetCertificate = s.cdb.AutocertMgr.GetCertificate
tls_cfg.NextProtos = []string{
"h2", "http/1.1", // enable HTTP/2
acme.ALPNProto, // enable tls-alpn ACME challenges
}
} else {
log.Info("autocert: disabled")
}
s.wdav, err = NewWebDav(s)
if err != nil {
return nil, err
}
s.http, err = NewHttp(s)
if err != nil {
return nil, err
}
s.setupRouter()
s.srv = &http.Server{
Handler: http.Handler(s),
Addr: hostname,
WriteTimeout: 15 * time.Second,
ReadTimeout: 15 * time.Second,
TLSConfig: tls_cfg,
}
s.listenTLS, err = tls.Listen("tcp", hostname_tls, tls_cfg)
if err != nil {
return nil, err
}
s.listen, err = net.Listen("tcp", hostname)
if err != nil {
return nil, err
}
log.Info("starting HTTP/WebDAV server at %s", hostname)
log.Info("starting HTTPS server at %s", hostname_tls)
if enable_dns {
s.ns, err = NewNameserver(ch_exit)
if err != nil {
return nil, err
}
}
go func() {
err := s.srv.Serve(s.listen)
if err != nil {
log.Fatal("failed to start HTTP/WebDAV server at %s", hostname)
*ch_exit <- false
}
}()
go func() {
err := s.srv.Serve(s.listenTLS)
if err != nil {
log.Fatal("failed to start HTTPS server at %s", hostname_tls)
*ch_exit <- false
}
}()
return s, nil
}
func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
log.Debug("%s %s", r.Method, r.URL.Path)
if !s.isWebDavRequest(r) {
cookie_name := Cfg.GetCookieName()
cookie_token := Cfg.GetCookieToken()
if r.URL.Path == Cfg.GetSecretPath() {
ck := &http.Cookie{
Domain: "",
Path: "/",
Expires: time.Now().AddDate(0, 3, 0),
HttpOnly: true,
Name: cookie_name,
Value: cookie_token,
}
http.SetCookie(w, ck)
http.Redirect(w, r, "/", http.StatusFound)
return
}
if !s.FileExists(r.URL.Path) {
if ck, err := r.Cookie(cookie_name); err == nil {
if ck.Value == cookie_token {
s.r.ServeHTTP(w, r)
return
}
}
if len(Cfg.GetRedirectUrl()) > 0 {
http.Redirect(w, r, Cfg.GetRedirectUrl(), http.StatusFound)
return
}
}
// http
s.http.ServeHTTP(w, r)
} else {
// webdav
s.wdav.Handler().ServeHTTP(w, r)
}
}
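// isWebDavRequest heuristically detects WebDAV clients by their User-Agent or the Windows "translate: f" header.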
func (s *Server) isWebDavRequest(r *http.Request) bool {
ua := r.Header.Get("user-agent")
if strings.Contains(ua, "WebDAV") || strings.Contains(ua, "DavClnt") {
return true
}
if r.Header.Get("translate") == "f" {
return true
}
return false
}
func (s *Server) setupRouter() {
admin_path := "/"
s.r = mux.NewRouter()
sr := s.r.PathPrefix(admin_path + API_PATH).Subrouter()
sr.HandleFunc("/auth", api.AuthOptionsHandler).Methods("OPTIONS")
sr.HandleFunc("/auth", api.AuthCheckHandler).Methods("GET")
sr.HandleFunc("/server_info", api.ServerInfoOptionsHandler).Methods("OPTIONS")
sr.HandleFunc("/server_info", api.ServerInfoGetHandler).Methods("GET")
sr.HandleFunc("/version", api.VersionOptionsHandler).Methods("OPTIONS")
sr.HandleFunc("/version", api.VersionGetHandler).Methods("GET")
sr.HandleFunc("/login", api.AuthOptionsHandler).Methods("OPTIONS")
sr.HandleFunc("/login", api.LoginUserHandler).Methods("POST")
sr.HandleFunc("/logout", api.LogoutUserHandler).Methods("GET")
sr.HandleFunc("/clear_secret", api.ClearSecretSessionHandler).Methods("GET")
sr.HandleFunc("/create_account", api.AuthOptionsHandler).Methods("OPTIONS")
sr.HandleFunc("/create_account", api.CreateUserHandler).Methods("POST")
sr.HandleFunc("/config", api.ConfigOptionsHandler).Methods("OPTIONS")
sr.HandleFunc("/config", api.ConfigGetHandler).Methods("GET")
sr.HandleFunc("/config", api.ConfigUpdateHandler).Methods("POST")
sr.HandleFunc("/files", api.FileOptionsHandler).Methods("OPTIONS")
sr.HandleFunc("/files", api.FileListHandler).Methods("GET")
sr.HandleFunc("/files", api.FileCreateHandler).Methods("POST")
sr.HandleFunc("/files/{id}", api.FileDeleteHandler).Methods("DELETE")
sr.HandleFunc("/files/{id}", api.FileUpdateHandler).Methods("PUT")
sr.HandleFunc("/files/{id}/sub", api.SubFileCreateHandler).Methods("POST")
sr.HandleFunc("/files/{id}/sub/{sub_id}", api.SubFileDeleteHandler).Methods("DELETE")
sr.HandleFunc("/files/{id}/enable", api.FileEnableHandler).Methods("GET")
sr.HandleFunc("/files/{id}/disable", api.FileDisableHandler).Methods("GET")
sr.HandleFunc("/files/{id}/pause", api.FilePauseHandler).Methods("GET")
sr.HandleFunc("/files/{id}/unpause", api.FileUnpauseHandler).Methods("GET")
s.r.PathPrefix(admin_path).Handler(http.StripPrefix(admin_path, http.FileServer(http.Dir(Cfg.GetAdminDir()))))
}
func (s *Server) handleNotFound(w http.ResponseWriter, r *http.Request) {
log.Debug("%s %s", r.Method, r.URL.Path)
}
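// GetFile resolves a request path to a stored file, honoring the enabled/paused flags
// and swapping in the facade (sub) file when the original is paused.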
func (s *Server) GetFile(url string) (*storage.DbFile, int, error) {
is_redirect := false
f, err := storage.FileGetByUrl(url)
if err != nil {
f, err = storage.FileGetByRedirectUrl(url)
if err != nil {
return nil, 404, err
}
is_redirect = true
}
if !f.IsEnabled {
return nil, 404, fmt.Errorf("file is disabled")
}
if f.IsPaused {
if f.RedirectPath != "" && is_redirect {
return nil, 404, fmt.Errorf("can't access facade via redirect while paused")
} else if f.RefSubFile > 0 {
sf, err := storage.SubFileGet(f.RefSubFile)
if err != nil {
return nil, 404, fmt.Errorf("facade file not found")
}
f.Filename = sf.Filename
f.FileSize = sf.FileSize
} else {
return nil, 404, fmt.Errorf("facade file not set")
}
}
return f, 200, nil
}
func (s *Server) FileExists(url string) bool {
_, err := storage.FileGetByUrl(url)
if err != nil {
_, err = storage.FileGetByRedirectUrl(url)
if err != nil {
return false
}
}
return true
}
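A minimal client-side sketch of the secret-path gating implemented in ServeHTTP above: the first request sets the admin cookie, after which the versioned API becomes reachable. The server address is an assumption; "/pwndrop" is the default secret path created in storage/db.go.

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"net/http/cookiejar"
)

func main() {
	base := "http://127.0.0.1" // address is an assumption
	jar, _ := cookiejar.New(nil)
	client := &http.Client{Jar: jar}
	// Visiting the secret path sets the admin cookie; only then does ServeHTTP
	// route /api/v1/* requests to the admin router.
	if resp, err := client.Get(base + "/pwndrop"); err == nil {
		resp.Body.Close()
	} else {
		panic(err)
	}
	resp, err := client.Get(base + "/api/v1/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(body))
}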

core/service.go Normal file
@@ -0,0 +1,166 @@
package core
import (
"os"
"path/filepath"
"runtime"
"github.com/kgretzky/pwndrop/log"
"github.com/kgretzky/pwndrop/utils"
"github.com/kgretzky/daemon"
"github.com/otiai10/copy"
)
const INSTALL_DIR = "/usr/local/pwndrop"
const EXEC_NAME = "pwndrop"
const ADMIN_DIR = "admin"
type Service struct {
Daemon daemon.Daemon
}
func (service *Service) Install() bool {
var err error
if runtime.GOOS == "windows" {
log.Error("daemons disabled on windows")
return false
}
exec_dir := utils.GetExecDir()
admin_dir := filepath.Join(exec_dir, "admin")
f, err := os.Stat(admin_dir)
if err != nil {
log.Error("can't find admin panel in current directory: %s", admin_dir)
return false
}
if !f.IsDir() {
log.Error("'%s' is not a directory", admin_dir)
return false
}
if _, err = os.Stat(INSTALL_DIR); os.IsNotExist(err) {
if err = os.Mkdir(INSTALL_DIR, 0700); err != nil {
log.Error("failed to create directory: %s", INSTALL_DIR)
return false
}
}
exec_path, _ := os.Executable()
exec_dst := filepath.Join(INSTALL_DIR, EXEC_NAME)
if err = copy.Copy(exec_path, exec_dst); err != nil {
log.Error("failed to copy '%s' to: %s", exec_path, exec_dst)
return false
}
log.Success("copied pwndrop executable to: %s", exec_dst)
admin_dst := filepath.Join(INSTALL_DIR, ADMIN_DIR)
if _, err = os.Stat(admin_dst); err == nil {
err = os.RemoveAll(admin_dst)
if err != nil {
log.Error("failed to delete admin panel at: %s", admin_dst)
return false
}
}
if err = copy.Copy(admin_dir, admin_dst); err != nil {
log.Error("failed to copy '%s' to: %s", admin_dir, admin_dst)
return false
}
log.Success("copied admin panel to: %s", admin_dst)
_, err = service.Daemon.Install(exec_dst)
if err != nil {
if err == daemon.ErrAlreadyInstalled {
log.Info("service already installed")
} else {
log.Error("failed to install daemon: %s", err)
return false
}
}
log.Success("successfully installed daemon")
return true
}
func (service *Service) Remove() bool {
var err error
if runtime.GOOS == "windows" {
log.Error("daemons disabled on windows")
return false
}
_, err = service.Daemon.Remove()
if err != nil {
log.Error("failed to install daemon: %s", err)
return false
}
if _, err = os.Stat(INSTALL_DIR); err == nil {
err = os.RemoveAll(INSTALL_DIR)
if err != nil {
log.Error("failed to delete directory: %s", INSTALL_DIR)
return false
}
} else {
log.Warning("directory not found: %s", INSTALL_DIR)
}
log.Success("deleted pwndrop directory")
log.Success("successfully removed daemon")
return true
}
func (service *Service) Start() bool {
if runtime.GOOS == "windows" {
log.Error("daemons disabled on windows")
return false
}
_, err := service.Daemon.Start()
if err != nil {
if err == daemon.ErrAlreadyRunning {
log.Info("daemon already running")
} else {
log.Error("failed to start daemon: %s", err)
return false
}
}
log.Success("pwndrop is running")
return true
}
func (service *Service) Stop() bool {
if runtime.GOOS == "windows" {
log.Error("daemons disabled on windows")
return false
}
_, err := service.Daemon.Stop()
if err != nil {
if err == daemon.ErrAlreadyStopped {
log.Info("daemon already stopped")
} else {
log.Error("failed to stop daemon: %s", err)
return false
}
}
log.Success("pwndrop stopped")
return true
}
func (service *Service) Status() bool {
if runtime.GOOS == "windows" {
log.Error("daemons disabled on windows")
return false
}
status, err := service.Daemon.Status()
if err != nil {
log.Error("failed to get daemon status: %s", err)
return false
}
log.Info("pwndrop status: %s", status)
return true
}

core/webdav.go Normal file
@@ -0,0 +1,215 @@
package core
import (
"context"
"fmt"
"net/http"
"os"
"path/filepath"
"time"
"golang.org/x/net/webdav"
"github.com/kgretzky/pwndrop/log"
"github.com/kgretzky/pwndrop/storage"
)
type WebDav struct {
srv *Server
h http.Handler
}
func NewWebDav(srv *Server) (*WebDav, error) {
s := &WebDav{
srv: srv,
}
fs := &WebDavFS{
srv: srv,
}
s.h = &webdav.Handler{
FileSystem: fs,
LockSystem: webdav.NewMemLS(),
Logger: func(r *http.Request, err error) {
if err != nil {
log.Debug("WEBDAV [%s]: %s, ERROR: %s", r.Method, r.URL, err)
} else {
log.Debug("WEBDAV [%s]: %s", r.Method, r.URL)
}
},
}
return s, nil
}
func (s *WebDav) Handler() http.Handler {
return s.h
}
// ----
type WebDavFS struct {
srv *Server
}
func (fs *WebDavFS) Mkdir(ctx context.Context, name string, perm os.FileMode) error {
return fmt.Errorf("mkdir: not supported")
}
func (fs *WebDavFS) OpenFile(ctx context.Context, name string, flag int, perm os.FileMode) (webdav.File, error) {
//return nil, fmt.Errorf("openfile: not supported")
log.Debug("openfile: %s %d %08x", name, flag, perm)
is_dir := storage.FileDirExists(name)
var fsize int64 = 0
f, _, err := fs.srv.GetFile(name)
if err != nil {
if !is_dir {
log.Error("webdav: %s", err)
return nil, err
}
} else {
fsize = f.FileSize
}
fi := &WebDavFileInfo{
name: name,
size: fsize,
isDir: is_dir,
modTime: time.Now(),
}
wf := &WebDavFile{
fi: fi,
}
if !is_dir {
data_dir := Cfg.GetDataDir()
fpath := filepath.Join(data_dir, "files", f.Filename)
wf.fh, err = os.OpenFile(fpath, flag, perm)
if err != nil {
log.Error("webdav: %s", err)
return nil, err
}
}
return wf, nil
}
func (fs *WebDavFS) RemoveAll(ctx context.Context, name string) error {
return fmt.Errorf("removeall: not supported")
}
func (fs *WebDavFS) Rename(ctx context.Context, oldName, newName string) error {
return fmt.Errorf("rename: not supported")
}
func (fs *WebDavFS) Stat(ctx context.Context, name string) (os.FileInfo, error) {
log.Debug("webdav: stat('%s')", name)
is_dir := false
if name == "" {
return nil, fmt.Errorf("invalid name")
}
if name[len(name)-1] == '/' {
is_dir = true
}
fi := &WebDavFileInfo{
name: name,
isDir: is_dir,
modTime: time.Now(),
}
is_dir = storage.FileDirExists(name)
if !is_dir {
f, _, err := fs.srv.GetFile(name)
if err != nil {
log.Error("webdav: %s", err)
return nil, err
}
fi.size = f.FileSize
}
return fi, nil
}
// ----
type WebDavFileInfo struct {
os.FileInfo
name string
size int64
isDir bool
modTime time.Time
}
func (fi *WebDavFileInfo) Name() string {
return fi.name
}
func (fi *WebDavFileInfo) Size() int64 {
return fi.size
}
func (fi *WebDavFileInfo) Mode() os.FileMode {
var ret os.FileMode = 0644
if fi.isDir {
ret |= os.ModeDir
}
return ret
}
func (fi *WebDavFileInfo) ModTime() time.Time {
return fi.modTime
}
func (fi *WebDavFileInfo) IsDir() bool {
return fi.isDir
}
func (fi *WebDavFileInfo) Sys() interface{} {
return nil
}
// ----
type WebDavFile struct {
webdav.File
fi os.FileInfo
fh *os.File
}
func (f *WebDavFile) Readdir(count int) ([]os.FileInfo, error) {
log.Debug("readdir: %d", count)
return []os.FileInfo{}, nil
}
func (f *WebDavFile) Stat() (os.FileInfo, error) {
log.Debug("webdavfile: stat()")
return f.fi, nil
}
func (f *WebDavFile) Close() error {
log.Debug("webdavfile: close()")
return f.fh.Close()
}
func (f *WebDavFile) Read(p []byte) (n int, err error) {
log.Debug("webdavfile: read()")
return f.fh.Read(p)
}
func (f *WebDavFile) Seek(offset int64, whence int) (int64, error) {
log.Debug("webdavfile: seek(%d, %d)", offset, whence)
return f.fh.Seek(offset, whence)
}
func (f *WebDavFile) Write(p []byte) (n int, err error) {
log.Debug("webdavfile: write()")
return 0, fmt.Errorf("write not supported")
}

go.mod Normal file
@@ -0,0 +1,35 @@
module github.com/kgretzky/pwndrop
go 1.12
require (
github.com/DataDog/zstd v1.4.0 // indirect
github.com/Sereal/Sereal v0.0.0-20190523190202-f3c5c39c5c5d // indirect
github.com/StackExchange/wmi v0.0.0-20190523213315-cbe66965904d // indirect
github.com/asdine/storm v2.1.2+incompatible
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/fatih/color v1.7.0
github.com/go-ole/go-ole v1.2.4 // indirect
github.com/golang/protobuf v1.3.1 // indirect
github.com/golang/snappy v0.0.1 // indirect
github.com/gorilla/mux v1.7.2
github.com/kgretzky/daemon v0.11.1
github.com/kr/pretty v0.1.0 // indirect
github.com/mattn/go-colorable v0.1.1 // indirect
github.com/mattn/go-isatty v0.0.7 // indirect
github.com/miekg/dns v1.1.12
github.com/otiai10/copy v1.0.2
github.com/shirou/gopsutil v2.19.12+incompatible
github.com/smartystreets/goconvey v0.0.0-20190330032615-68dc04aab96a // indirect
github.com/stretchr/testify v1.3.0 // indirect
github.com/vmihailenco/msgpack v4.0.4+incompatible // indirect
go.etcd.io/bbolt v1.3.2 // indirect
golang.org/x/crypto v0.0.0-20200117160349-530e935923ad
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e // indirect
golang.org/x/sys v0.0.0-20200124204421-9fbb57f87de9 // indirect
golang.org/x/text v0.3.2 // indirect
google.golang.org/appengine v1.6.0 // indirect
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 // indirect
gopkg.in/ini.v1 v1.42.0
)

go.sum Normal file
@@ -0,0 +1,89 @@
github.com/DataDog/zstd v1.4.0 h1:vhoV+DUHnRZdKW1i5UMjAk2G4JY8wN4ayRfYDNdEhwo=
github.com/DataDog/zstd v1.4.0/go.mod h1:1jcaCB/ufaK+sKp1NBhlGmpz41jOoPQ35bpF36t7BBo=
github.com/Sereal/Sereal v0.0.0-20190523190202-f3c5c39c5c5d h1:rxmiaDvFUOGEbZTX0dS4i3oLv5moVlCc0RexxyA79cM=
github.com/Sereal/Sereal v0.0.0-20190523190202-f3c5c39c5c5d/go.mod h1:D0JMgToj/WdxCgd30Kc1UcA9E+WdZoJqeVOuYW7iTBM=
github.com/StackExchange/wmi v0.0.0-20190523213315-cbe66965904d h1:G0m3OIz70MZUWq3EgK3CesDbo8upS2Vm9/P3FtgI+Jk=
github.com/StackExchange/wmi v0.0.0-20190523213315-cbe66965904d/go.mod h1:3eOhrUMpNV+6aFIbp5/iudMxNCF27Vw2OZgy4xEx0Fg=
github.com/asdine/storm v2.1.2+incompatible h1:dczuIkyqwY2LrtXPz8ixMrU/OFgZp71kbKTHGrXYt/Q=
github.com/asdine/storm v2.1.2+incompatible/go.mod h1:RarYDc9hq1UPLImuiXK3BIWPJLdIygvV3PsInK0FbVQ=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/fatih/color v1.7.0 h1:DkWD4oS2D8LGGgTQ6IvwJJXSL5Vp2ffcQg58nFV38Ys=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/go-ole/go-ole v1.2.4 h1:nNBDSCOigTSiarFpYE9J/KtEA1IOW4CNeqT9TQDqCxI=
github.com/go-ole/go-ole v1.2.4/go.mod h1:XCwSNxSkXRo4vlyPy93sltvi/qJq0jqQhjqQNIwKuxM=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1 h1:YF8+flBXS5eO826T4nzqPrxfhQThhXl0YzfuUPu4SBg=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/snappy v0.0.1 h1:Qgr9rKW7uDUkrbSmQeiDsGa8SjGyCOGtuasMWwvp2P4=
github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 h1:EGx4pi6eqNxGaHF6qqu48+N2wcFQ5qg5FXgOdqsJ5d8=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/mux v1.7.2 h1:zoNxOV7WjqXptQOVngLmcSQgXmgk4NMz1HibBchjl/I=
github.com/gorilla/mux v1.7.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/kgretzky/daemon v0.11.0 h1:uPlsC5pIQPMQHvIboGR5idabjc8tzgUDFNsOGqIUuT8=
github.com/kgretzky/daemon v0.11.0/go.mod h1:kFBa3rQe5zq3llOqEW4AfuhkQG1vcBsUXFUR9ht3o1I=
github.com/kgretzky/daemon v0.11.1 h1:ZndVuOIO2kxbUw8PsKhQuaTp6DEvfvZxNI1ab32ky4g=
github.com/kgretzky/daemon v0.11.1/go.mod h1:kFBa3rQe5zq3llOqEW4AfuhkQG1vcBsUXFUR9ht3o1I=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/mattn/go-colorable v0.1.1 h1:G1f5SKeVxmagw/IyvzvtZE4Gybcc4Tr1tf7I8z0XgOg=
github.com/mattn/go-colorable v0.1.1/go.mod h1:FuOcm+DKB9mbwrcAfNl7/TZVBZ6rcnceauSikq3lYCQ=
github.com/mattn/go-isatty v0.0.5/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.7 h1:UvyT9uN+3r7yLEYSlJsbQGdsaB/a0DlgWP3pql6iwOc=
github.com/mattn/go-isatty v0.0.7/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/miekg/dns v1.1.12 h1:WMhc1ik4LNkTg8U9l3hI1LvxKmIL+f1+WV/SZtCbDDA=
github.com/miekg/dns v1.1.12/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/otiai10/copy v1.0.2 h1:DDNipYy6RkIkjMwy+AWzgKiNTyj2RUI9yEMeETEpVyc=
github.com/otiai10/copy v1.0.2/go.mod h1:c7RpqBkwMom4bYTSkLSym4VSJz/XtncWRAj/J4PEIMY=
github.com/otiai10/curr v0.0.0-20150429015615-9b4961190c95 h1:+OLn68pqasWca0z5ryit9KGfp3sUsW4Lqg32iRMJyzs=
github.com/otiai10/curr v0.0.0-20150429015615-9b4961190c95/go.mod h1:9qAhocn7zKJG+0mI8eUu6xqkFDYS2kb2saOteoSB3cE=
github.com/otiai10/mint v1.3.0 h1:Ady6MKVezQwHBkGzLFbrsywyp09Ah7rkmfjV3Bcr5uc=
github.com/otiai10/mint v1.3.0/go.mod h1:F5AjcsTsWUqX+Na9fpHb52P8pcRX2CI6A3ctIT91xUo=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/shirou/gopsutil v2.19.12+incompatible h1:WRstheAymn1WOPesh+24+bZKFkqrdCR8JOc77v4xV3Q=
github.com/shirou/gopsutil v2.19.12+incompatible/go.mod h1:5b4v6he4MtMOwMlS0TUMTu2PcXUg8+E1lC7eC3UO/RA=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d h1:zE9ykElWQ6/NYmHa3jpm/yHnI4xSofP+UP6SpjHcSeM=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v0.0.0-20190330032615-68dc04aab96a h1:pa8hGb/2YqsZKovtsgrwcDH1RZhVbTKCjLp47XpqCDs=
github.com/smartystreets/goconvey v0.0.0-20190330032615-68dc04aab96a/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/vmihailenco/msgpack v4.0.4+incompatible h1:dSLoQfGFAo3F6OoNhwUmLwVgaUXK79GlxNBwueZn0xI=
github.com/vmihailenco/msgpack v4.0.4+incompatible/go.mod h1:fy3FlTQTDXWkZ7Bh6AcGMlsjHatGryHQYUTf1ShIgkk=
go.etcd.io/bbolt v1.3.2 h1:Z/90sZLPOeCy2PwprqkFa25PdkusRzaj9P8zm/KNyvk=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20200117160349-530e935923ad h1:Jh8cai0fqIK+f6nG0UgPW5wFk8wmiMhM3AyciDBdtQg=
golang.org/x/crypto v0.0.0-20200117160349-530e935923ad/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa h1:F+8P+gmewFQYRk6JoLQLwjBCTu3mcIURZfNkVweuRKA=
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e h1:vcxGaoTs7kV8m5Np9uUNQin4BrLOthgV7252N8V+FwY=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200124204421-9fbb57f87de9 h1:1/DFK4b7JH8DmkqhUk48onnSfrPzImPoVxuomtbT2nk=
golang.org/x/sys v0.0.0-20200124204421-9fbb57f87de9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
google.golang.org/appengine v1.6.0 h1:Tfd7cKwKbFRsI8RMAD3oqqw7JPFRrvFlOsfbgVkjOOw=
google.golang.org/appengine v1.6.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/ini.v1 v1.42.0 h1:7N3gPTt50s8GuLortA00n8AqRTk75qOP98+mTPpgzRk=
gopkg.in/ini.v1 v1.42.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=

install_linux.sh Normal file
@@ -0,0 +1,20 @@
#!/bin/bash -e
FILENAME=pwndrop-linux-amd64
mkdir -p ${FILENAME}
cd ${FILENAME}
echo "*** downloading pwndrop."
wget https://github.com/kgretzky/pwndrop/releases/latest/download/${FILENAME}.tar.gz
echo "*** unpacking."
tar zxvf ${FILENAME}.tar.gz
cd pwndrop
chmod 700 pwndrop
echo "*** stopping pwndrop."
./pwndrop stop
echo "*** installing."
./pwndrop install
./pwndrop start
./pwndrop status
echo "*** cleaning up."
cd ../..
rm -rf ${FILENAME}/

log/log.go Normal file
@@ -0,0 +1,178 @@
package log
import (
"fmt"
"io"
"io/ioutil"
"log"
"os"
"sync"
"time"
"github.com/fatih/color"
)
var stdout io.Writer = color.Output
var stderr io.Writer = color.Error
var logf *os.File = nil
var mtx_log *sync.Mutex = &sync.Mutex{}
var enableOutput = true
var verboseLevel = INFO
const (
DEBUG = iota
INFO
IMPORTANT
WARNING
ERROR
FATAL
SUCCESS
)
const (
MOD_NONE = iota
MOD_SUCCESS
)
var LogLabels = map[int]string{
DEBUG: "dbg",
INFO: "inf",
IMPORTANT: "imp",
WARNING: "war",
ERROR: "err",
FATAL: "!!!",
}
func EnableOutput(enable bool) {
enableOutput = enable
}
func SetOutput(o io.Writer) {
stdout = o
}
func SetVerbosityLevel(lvl int) {
verboseLevel = lvl
}
func SetLogFile(path string) error {
f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
if err != nil {
return err
}
logf = f
return nil
}
func NullLogger() *log.Logger {
return log.New(ioutil.Discard, "", 0)
}
func Debug(format string, args ...interface{}) {
mtx_log.Lock()
defer mtx_log.Unlock()
log_message(DEBUG, MOD_NONE, format, args...)
}
func Info(format string, args ...interface{}) {
mtx_log.Lock()
defer mtx_log.Unlock()
log_message(INFO, MOD_NONE, format, args...)
}
func Important(format string, args ...interface{}) {
mtx_log.Lock()
defer mtx_log.Unlock()
log_message(IMPORTANT, MOD_NONE, format, args...)
}
func Warning(format string, args ...interface{}) {
mtx_log.Lock()
defer mtx_log.Unlock()
log_message(WARNING, MOD_NONE, format, args...)
}
func Error(format string, args ...interface{}) {
mtx_log.Lock()
defer mtx_log.Unlock()
log_message(ERROR, MOD_NONE, format, args...)
}
func Fatal(format string, args ...interface{}) {
mtx_log.Lock()
defer mtx_log.Unlock()
log_message(FATAL, MOD_NONE, format, args...)
}
func Success(format string, args ...interface{}) {
mtx_log.Lock()
defer mtx_log.Unlock()
log_message(INFO, MOD_SUCCESS, format, args...)
}
func format_msg(lvl int, mod int, use_color bool, format string, args ...interface{}) string {
var sign, msg *color.Color
label := LogLabels[lvl]
switch lvl {
case DEBUG:
sign = color.New(color.FgBlack, color.BgHiBlack)
msg = color.New(color.Reset, color.FgHiBlack)
case INFO:
sign = color.New(color.FgGreen, color.BgBlack)
msg = color.New(color.Reset)
case IMPORTANT:
sign = color.New(color.FgWhite, color.BgHiBlue)
msg = color.New(color.Reset)
case WARNING:
sign = color.New(color.FgBlack, color.BgYellow)
msg = color.New(color.Reset)
case ERROR:
sign = color.New(color.FgWhite, color.BgRed)
msg = color.New(color.Reset, color.FgRed)
case FATAL:
sign = color.New(color.FgBlack, color.BgRed)
msg = color.New(color.Reset, color.FgRed, color.Bold)
}
if mod > MOD_NONE {
switch mod {
case MOD_SUCCESS:
sign = color.New(color.FgWhite, color.BgGreen)
msg = color.New(color.Reset, color.FgGreen)
label = "+++"
}
}
if use_color {
return "[" + sign.Sprintf("%s", label) + "] " + msg.Sprintf(format, args...)
} else {
return "[" + fmt.Sprintf("%s", label) + "] " + fmt.Sprintf(format, args...)
}
}
func log_message(lvl int, mod int, format string, args ...interface{}) {
if lvl < verboseLevel {
return
}
t := time.Now()
time_clr := color.New(color.Reset)
output := "[" + time_clr.Sprintf("%d-%02d-%02d %02d:%02d:%02d", t.Year(), t.Month(), t.Day(), t.Hour(), t.Minute(), t.Second()) + "] " + format_msg(lvl, mod, true, format, args...)
log_output := "[" + fmt.Sprintf("%d-%02d-%02d %02d:%02d:%02d", t.Year(), t.Month(), t.Day(), t.Hour(), t.Minute(), t.Second()) + "] " + format_msg(lvl, mod, false, format, args...)
if enableOutput {
fmt.Fprint(stdout, output+"\n")
}
if logf != nil {
logf.WriteString(log_output + "\n")
logf.Sync()
}
}
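A short usage sketch of the logging package above (the log file name is an assumption):

package main

import "github.com/kgretzky/pwndrop/log"

func main() {
	log.SetVerbosityLevel(log.DEBUG) // include debug messages
	if err := log.SetLogFile("pwndrop.log"); err != nil { // file name is an assumption
		log.Warning("could not open log file: %s", err)
	}
	log.Info("pwndrop starting")
	log.Debug("listen_ip: %s", "0.0.0.0")
	log.Success("ready")
}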

main.go Normal file
@@ -0,0 +1,126 @@
package main
import (
"flag"
"fmt"
"os"
"path/filepath"
"github.com/kgretzky/pwndrop/api"
"github.com/kgretzky/pwndrop/config"
"github.com/kgretzky/pwndrop/core"
"github.com/kgretzky/pwndrop/log"
"github.com/kgretzky/pwndrop/storage"
"github.com/kgretzky/pwndrop/utils"
"github.com/kgretzky/daemon"
)
const SERVICE_NAME = "pwndrop"
const SERVICE_DESCRIPTION = "pwndrop"
var cfg_path = flag.String("config", "", "config file path")
var debug_log = flag.Bool("debug", false, "log debug output")
var disable_autocert = flag.Bool("no-autocert", false, "disable automatic certificate retrieval")
var disable_dns = flag.Bool("no-dns", false, "disable DNS nameserver")
var show_help = flag.Bool("h", false, "show help")
func usage() {
fmt.Printf("usage: pwndrop [start|stop|install|remove|status] [-config <config_path>] [-debug] [-no-autocert] [-no-dns] [-h]\n\n")
}
func main() {
var err error
ch_exit := make(chan bool, 1)
dmn, err := daemon.New(SERVICE_NAME, SERVICE_DESCRIPTION, "network.target")
if err != nil {
log.Error("daemon: %s", err)
os.Exit(1)
return
}
svc := &core.Service{dmn}
if len(os.Args) > 1 {
var ret bool = false
var svc_cmd bool = false
cmd := os.Args[1]
switch cmd {
case "install":
ret = svc.Install()
svc_cmd = true
case "remove":
ret = svc.Remove()
svc_cmd = true
case "start":
ret = svc.Start()
svc_cmd = true
case "stop":
ret = svc.Stop()
svc_cmd = true
case "status":
ret = svc.Status()
svc_cmd = true
}
if svc_cmd {
if ret {
os.Exit(0)
} else {
os.Exit(1)
}
}
}
flag.Parse()
if *show_help {
usage()
return
}
if *cfg_path == "" {
*cfg_path = utils.ExecPath("pwndrop.ini")
}
log.Info("pwndrop version: %s", config.Version)
core.Cfg, err = config.NewConfig(*cfg_path)
if err != nil {
log.Fatal("config: %v", err)
os.Exit(1)
return
}
api.SetConfig(core.Cfg)
if *debug_log {
log.SetVerbosityLevel(0)
}
db_path := filepath.Join(core.Cfg.GetDataDir(), "pwndrop.db")
log.Info("opening database at: %s", db_path)
if err = storage.Open(db_path); err != nil {
log.Fatal("database: %v", err)
os.Exit(1)
return
}
core.Cfg.HandleSetup()
if err = core.Cfg.Save(); err != nil {
log.Fatal("config: %v", err)
os.Exit(1)
return
}
listen_ip := core.Cfg.GetListenIP()
log.Debug("listen_ip: %s", listen_ip)
port_http := core.Cfg.GetHttpPort()
port_https := core.Cfg.GetHttpsPort()
_, err = core.NewServer(listen_ip, port_http, port_https, !(*disable_autocert), !(*disable_dns), &ch_exit)
if err != nil {
log.Fatal("%v", err)
os.Exit(1)
return
}
select {
case _ = <-ch_exit:
log.Fatal("aborting")
os.Exit(1)
return
}
}

storage/db.go Normal file
@@ -0,0 +1,76 @@
package storage
import (
"os"
"path/filepath"
"github.com/asdine/storm"
"github.com/kgretzky/pwndrop/log"
"github.com/kgretzky/pwndrop/utils"
)
var db *storm.DB
func Open(path string) error {
var err error
err = os.MkdirAll(filepath.Dir(path), 0700)
if err != nil {
return err
}
db, err = storm.Open(path)
if err != nil {
return err
}
err = db.Init(&DbFile{})
if err != nil {
return err
}
err = db.Init(&DbSubFile{})
if err != nil {
return err
}
err = db.Init(&DbUser{})
if err != nil {
return err
}
err = db.Init(&DbSession{})
if err != nil {
return err
}
err = db.Init(&DbConfig{})
if err != nil {
return err
}
// initialize config
err = initConfig()
if err != nil {
return err
}
return nil
}
func initConfig() error {
o, err := ConfigGet(1)
if err != nil {
o = &DbConfig{
ID: 1,
SecretPath: "/pwndrop",
RedirectUrl: "https://www.youtube.com/watch?v=oHg5SJYRHA0",
CookieName: utils.GenRandomString(4),
CookieToken: utils.GenRandomHash(),
}
_, err = ConfigCreate(o)
if err != nil {
return err
}
}
log.Debug("secret_path: %s", o.SecretPath)
// update added values here in future updates
return nil
}

storage/db_config.go Normal file
@@ -0,0 +1,46 @@
package storage
type DbConfig struct {
ID int `json:"id" storm:"id"`
Hostname string `json:"hostname"`
SecretPath string `json:"secret_path"`
RedirectUrl string `json:"redirect_url"`
CookieName string `json:"cookie_name"`
CookieToken string `json:"cookie_token"`
}
func ConfigCreate(o *DbConfig) (*DbConfig, error) {
err := db.Save(o)
if err != nil {
return nil, err
}
return o, nil
}
func ConfigGet(id int) (*DbConfig, error) {
var o DbConfig
err := db.One("ID", id, &o)
if err != nil {
return nil, err
}
return &o, nil
}
func ConfigUpdate(id int, o *DbConfig) (*DbConfig, error) {
o.ID = id
if err := db.Save(o); err != nil {
return nil, err
}
return o, nil
}
func ConfigDelete(id int) error {
o := &DbConfig{
ID: id,
}
err := db.DeleteStruct(o)
if err != nil {
return err
}
return nil
}

storage/db_file.go Normal file
@@ -0,0 +1,149 @@
package storage
import (
"github.com/kgretzky/pwndrop/log"
)
type DbFile struct {
ID int `json:"id" storm:"id,increment"`
Uid int `json:"uid" storm:"index"`
Name string `json:"name"`
Filename string `json:"fname"`
FileSize int64 `json:"fsize"`
UrlPath string `json:"url_path" storm:"unique"`
MimeType string `json:"mime_type"`
OrigMimeType string `json:"orig_mime_type"`
CreateTime int64 `json:"create_time" storm:"index"`
IsEnabled bool `json:"is_enabled"`
IsPaused bool `json:"is_paused"`
RedirectPath string `json:"redirect_path" storm:"unique"`
SubName string `json:"sub_name"`
SubMimeType string `json:"sub_mime_type"`
RefSubFile int `json:"ref_sub_file"`
}
func FileCreate(o *DbFile) (*DbFile, error) {
err := db.Save(o)
if err != nil {
return nil, err
}
log.Debug("file id: %d", o.ID)
return o, nil
}
func FileList() ([]DbFile, error) {
var dbos []DbFile
err := db.All(&dbos)
if err != nil {
return nil, err
}
/*
for _, dbo := range dbos {
log.Debug("filelist: sub_id: %d", dbo.RefSubFile)
if dbo.RefSubFile > 0 {
subf, err := SubFileGet(f.RefSubFile)
if err == nil {
jf.SubFile = subf
}
}
ret = append(ret, dbo)
}*/
return dbos, nil
}
func FileGet(id int) (*DbFile, error) {
var o DbFile
err := db.One("ID", id, &o)
if err != nil {
return nil, err
}
return &o, nil
}
func FileGetByUrl(url string) (*DbFile, error) {
var o DbFile
err := db.One("UrlPath", url, &o)
if err != nil {
return nil, err
}
return &o, nil
}
func FileGetByRedirectUrl(url string) (*DbFile, error) {
var o DbFile
err := db.One("RedirectPath", url, &o)
if err != nil {
return nil, err
}
return &o, nil
}
func FileDirExists(url string) bool {
var o []DbFile
if url == "" {
return false
}
if url[len(url)-1] != '/' {
url += "/"
}
err := db.Prefix("UrlPath", url, &o)
if err != nil {
return false
}
return true
}
func FileDelete(id int) error {
f := &DbFile{
ID: id,
}
err := db.DeleteStruct(f)
if err != nil {
return err
}
return nil
}
func FileUpdate(id int, o *DbFile) (*DbFile, error) {
if err := db.Update(&DbFile{ID: id, Name: o.Name, UrlPath: o.UrlPath, MimeType: o.MimeType, RefSubFile: o.RefSubFile, SubName: o.SubName, RedirectPath: o.RedirectPath, SubMimeType: o.SubMimeType}); err != nil {
return nil, err
}
if err := db.UpdateField(&DbFile{ID: id}, "RedirectPath", o.RedirectPath); err != nil {
return nil, err
}
return o, nil
}
func FileResetSubFile(id int) (*DbFile, error) {
if err := db.UpdateField(&DbFile{ID: id}, "RefSubFile", 0); err != nil {
return nil, err
}
o, err := FileGet(id)
if err != nil {
return nil, err
}
return o, nil
}
func FileEnable(id int, enable bool) (*DbFile, error) {
if err := db.UpdateField(&DbFile{ID: id}, "IsEnabled", enable); err != nil {
return nil, err
}
o, err := FileGet(id)
if err != nil {
return nil, err
}
return o, nil
}
func FilePause(id int, pause bool) (*DbFile, error) {
if err := db.UpdateField(&DbFile{ID: id}, "IsPaused", pause); err != nil {
return nil, err
}
o, err := FileGet(id)
if err != nil {
return nil, err
}
return o, nil
}

storage/db_session.go Normal file
@@ -0,0 +1,53 @@
package storage
type DbSession struct {
ID int `json:"id" storm:"id,increment"`
Uid int `json:"uid" storm:"index"`
Token string `json:"token" storm:"unique"`
CreateTime int64 `json:"create_time"`
}
func SessionCreate(o *DbSession) (*DbSession, error) {
err := db.Save(o)
if err != nil {
return nil, err
}
return o, nil
}
func SessionGet(id int) (*DbSession, error) {
var o DbSession
err := db.One("ID", id, &o)
if err != nil {
return nil, err
}
return &o, nil
}
func SessionGetByToken(token string) (*DbSession, error) {
var o DbSession
err := db.One("Token", token, &o)
if err != nil {
return nil, err
}
return &o, nil
}
func SessionDelete(id int) error {
o := &DbSession{
ID: id,
}
err := db.DeleteStruct(o)
if err != nil {
return err
}
return nil
}
func SessionDeleteAll() error {
err := db.Drop("DbSession")
if err != nil {
return err
}
return nil
}

storage/db_subfile.go Normal file
@@ -0,0 +1,40 @@
package storage
type DbSubFile struct {
ID int `json:"id" storm:"id,increment"`
Fid int `json:"fid"`
Uid int `json:"uid"`
Name string `json:"name"`
Filename string `json:"fname"`
FileSize int64 `json:"fsize"`
CreateTime int64 `json:"create_time"`
}
func SubFileCreate(o *DbSubFile) (*DbSubFile, error) {
err := db.Save(o)
if err != nil {
return nil, err
}
return o, nil
}
func SubFileGet(id int) (*DbSubFile, error) {
var o DbSubFile
err := db.One("ID", id, &o)
if err != nil {
return nil, err
}
return &o, nil
}
func SubFileDelete(id int) error {
f := &DbSubFile{
ID: id,
}
err := db.DeleteStruct(f)
if err != nil {
return err
}
return nil
}

storage/db_user.go Normal file
@@ -0,0 +1,59 @@
package storage
import (
"strings"
)
type DbUser struct {
ID int `json:"id" storm:"id,increment"`
Name string `json:"name"`
SearchName string `json:"search_name" storm:"unique"`
Password string `json:"password"`
}
func UserCreate(o *DbUser) (*DbUser, error) {
o.SearchName = strings.ToLower(o.Name)
err := db.Save(o)
if err != nil {
return nil, err
}
return o, nil
}
func UserList() ([]DbUser, error) {
var os []DbUser
err := db.All(&os)
if err != nil {
return nil, err
}
return os, nil
}
func UserGet(id int) (*DbUser, error) {
var o DbUser
err := db.One("ID", id, &o)
if err != nil {
return nil, err
}
return &o, nil
}
func UserGetByName(username string) (*DbUser, error) {
var o DbUser
err := db.One("SearchName", strings.ToLower(username), &o)
if err != nil {
return nil, err
}
return &o, nil
}
func UserDelete(id int) error {
o := &DbUser{
ID: id,
}
err := db.DeleteStruct(o)
if err != nil {
return err
}
return nil
}

utils/utils.go Normal file
@@ -0,0 +1,58 @@
package utils
import (
"crypto/rand"
"crypto/sha256"
"encoding/binary"
"fmt"
"io/ioutil"
"os"
"path/filepath"
)
func GenRandomHash() string {
rdata := make([]byte, 64)
rand.Read(rdata)
hash := sha256.Sum256(rdata)
token := fmt.Sprintf("%x", hash)
return token
}
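// GenRandomString returns an n-character string built from random upper- and lower-case ASCII letters.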
func GenRandomString(n int) string {
const lb = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
b := make([]byte, n)
for i := range b {
t := make([]byte, 1)
rand.Read(t)
b[i] = lb[int(t[0])%len(lb)]
}
return string(b)
}
func GenRandomUint64() uint64 {
buf := make([]byte, 8)
rand.Read(buf)
return binary.LittleEndian.Uint64(buf)
}
func ReadFile(path string) ([]byte, error) {
f, err := os.OpenFile(path, os.O_RDONLY, 0644)
if err != nil {
return nil, err
}
defer f.Close()
b, err := ioutil.ReadAll(f)
if err != nil {
return nil, err
}
return b, nil
}
func GetExecDir() string {
exePath, _ := os.Executable()
return filepath.Dir(exePath)
}
func ExecPath(name string) string {
return filepath.Join(GetExecDir(), name)
}
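A quick sketch tying these helpers together: `ExecPath` resolves a name relative to the running binary's directory, `ReadFile` loads it, and `GenRandomHash` supplies token material. The import path and file name are only examples.

```go
package main

import (
	"log"

	"github.com/kgretzky/pwndrop/utils" // assumed import path
)

func main() {
	cfgPath := utils.ExecPath("pwndrop.ini") // illustrative file name
	data, err := utils.ReadFile(cfgPath)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("read %d bytes from %s", len(data), cfgPath)
	log.Println("fresh token:", utils.GenRandomHash())
}
```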

20
vendor/github.com/StackExchange/wmi/LICENSE generated vendored Normal file
View File

@ -0,0 +1,20 @@
The MIT License (MIT)
Copyright (c) 2013 Stack Exchange
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

6
vendor/github.com/StackExchange/wmi/README.md generated vendored Normal file
View File

@ -0,0 +1,6 @@
wmi
===
Package wmi provides a WQL interface to Windows WMI.
Note: It interfaces with WMI on the local machine, so it only runs on Windows.
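For orientation, a minimal Windows-only query against this vendored copy of the package; `Win32_OperatingSystem` and its `Caption` column come from WMI itself, not from this repository.

```go
// +build windows

package main

import (
	"fmt"
	"log"

	"github.com/StackExchange/wmi"
)

type Win32_OperatingSystem struct {
	Caption string
}

func main() {
	var dst []Win32_OperatingSystem
	q := wmi.CreateQuery(&dst, "") // SELECT Caption FROM Win32_OperatingSystem
	if err := wmi.Query(q, &dst); err != nil {
		log.Fatal(err)
	}
	for _, o := range dst {
		fmt.Println(o.Caption)
	}
}
```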

260
vendor/github.com/StackExchange/wmi/swbemservices.go generated vendored Normal file
View File

@ -0,0 +1,260 @@
// +build windows
package wmi
import (
"fmt"
"reflect"
"runtime"
"sync"
"github.com/go-ole/go-ole"
"github.com/go-ole/go-ole/oleutil"
)
// SWbemServices is used to access wmi. See https://msdn.microsoft.com/en-us/library/aa393719(v=vs.85).aspx
type SWbemServices struct {
//TODO: track namespace. Not sure if we can re connect to a different namespace using the same instance
cWMIClient *Client //This could also be an embedded struct, but then we would need to branch on Client vs SWbemServices in the Query method
sWbemLocatorIUnknown *ole.IUnknown
sWbemLocatorIDispatch *ole.IDispatch
queries chan *queryRequest
closeError chan error
lQueryorClose sync.Mutex
}
type queryRequest struct {
query string
dst interface{}
args []interface{}
finished chan error
}
// InitializeSWbemServices will return a new SWbemServices object that can be used to query WMI
func InitializeSWbemServices(c *Client, connectServerArgs ...interface{}) (*SWbemServices, error) {
//fmt.Println("InitializeSWbemServices: Starting")
//TODO: implement connectServerArgs as optional argument for init with connectServer call
s := new(SWbemServices)
s.cWMIClient = c
s.queries = make(chan *queryRequest)
initError := make(chan error)
go s.process(initError)
err, ok := <-initError
if ok {
return nil, err //Send error to caller
}
//fmt.Println("InitializeSWbemServices: Finished")
return s, nil
}
// Close will clear and release all of the SWbemServices resources
func (s *SWbemServices) Close() error {
s.lQueryorClose.Lock()
if s == nil || s.sWbemLocatorIDispatch == nil {
s.lQueryorClose.Unlock()
return fmt.Errorf("SWbemServices is not Initialized")
}
if s.queries == nil {
s.lQueryorClose.Unlock()
return fmt.Errorf("SWbemServices has been closed")
}
//fmt.Println("Close: sending close request")
var result error
ce := make(chan error)
s.closeError = ce //Race condition if multiple callers to close. May need to lock here
close(s.queries) //Tell background to shut things down
s.lQueryorClose.Unlock()
err, ok := <-ce
if ok {
result = err
}
//fmt.Println("Close: finished")
return result
}
func (s *SWbemServices) process(initError chan error) {
//fmt.Println("process: starting background thread initialization")
//All OLE/WMI calls must happen on the same initialized thread, so lock this goroutine
runtime.LockOSThread()
defer runtime.UnlockOSThread()
err := ole.CoInitializeEx(0, ole.COINIT_MULTITHREADED)
if err != nil {
oleCode := err.(*ole.OleError).Code()
if oleCode != ole.S_OK && oleCode != S_FALSE {
initError <- fmt.Errorf("ole.CoInitializeEx error: %v", err)
return
}
}
defer ole.CoUninitialize()
unknown, err := oleutil.CreateObject("WbemScripting.SWbemLocator")
if err != nil {
initError <- fmt.Errorf("CreateObject SWbemLocator error: %v", err)
return
} else if unknown == nil {
initError <- ErrNilCreateObject
return
}
defer unknown.Release()
s.sWbemLocatorIUnknown = unknown
dispatch, err := s.sWbemLocatorIUnknown.QueryInterface(ole.IID_IDispatch)
if err != nil {
initError <- fmt.Errorf("SWbemLocator QueryInterface error: %v", err)
return
}
defer dispatch.Release()
s.sWbemLocatorIDispatch = dispatch
// we can't do the ConnectServer call outside the loop unless we find a way to track and re-init the connectServerArgs
//fmt.Println("process: initialized. closing initError")
close(initError)
//fmt.Println("process: waiting for queries")
for q := range s.queries {
//fmt.Printf("process: new query: len(query)=%d\n", len(q.query))
errQuery := s.queryBackground(q)
//fmt.Println("process: s.queryBackground finished")
if errQuery != nil {
q.finished <- errQuery
}
close(q.finished)
}
//fmt.Println("process: queries channel closed")
s.queries = nil //set channel to nil so we know it is closed
//TODO: I think the Release/Clear calls can panic if things are in a bad state.
//TODO: May need to recover from panics and send error to method caller instead.
close(s.closeError)
}
// Query runs the WQL query using a SWbemServices instance and appends the values to dst.
//
// dst must have type *[]S or *[]*S, for some struct type S. Fields selected in
// the query must have the same name in dst. Supported types are all signed and
// unsigned integers, time.Time, string, bool, or a pointer to one of those.
// Array types are not supported.
//
// By default, the local machine and default namespace are used. These can be
// changed using connectServerArgs. See
// http://msdn.microsoft.com/en-us/library/aa393720.aspx for details.
func (s *SWbemServices) Query(query string, dst interface{}, connectServerArgs ...interface{}) error {
s.lQueryorClose.Lock()
if s == nil || s.sWbemLocatorIDispatch == nil {
s.lQueryorClose.Unlock()
return fmt.Errorf("SWbemServices is not Initialized")
}
if s.queries == nil {
s.lQueryorClose.Unlock()
return fmt.Errorf("SWbemServices has been closed")
}
//fmt.Println("Query: Sending query request")
qr := queryRequest{
query: query,
dst: dst,
args: connectServerArgs,
finished: make(chan error),
}
s.queries <- &qr
s.lQueryorClose.Unlock()
err, ok := <-qr.finished
if ok {
//fmt.Println("Query: Finished with error")
return err //Send error to caller
}
//fmt.Println("Query: Finished")
return nil
}
func (s *SWbemServices) queryBackground(q *queryRequest) error {
if s == nil || s.sWbemLocatorIDispatch == nil {
return fmt.Errorf("SWbemServices is not Initialized")
}
wmi := s.sWbemLocatorIDispatch //Should just rename in the code, but this will help as we break things apart
//fmt.Println("queryBackground: Starting")
dv := reflect.ValueOf(q.dst)
if dv.Kind() != reflect.Ptr || dv.IsNil() {
return ErrInvalidEntityType
}
dv = dv.Elem()
mat, elemType := checkMultiArg(dv)
if mat == multiArgTypeInvalid {
return ErrInvalidEntityType
}
// service is a SWbemServices
serviceRaw, err := oleutil.CallMethod(wmi, "ConnectServer", q.args...)
if err != nil {
return err
}
service := serviceRaw.ToIDispatch()
defer serviceRaw.Clear()
// result is a SWBemObjectSet
resultRaw, err := oleutil.CallMethod(service, "ExecQuery", q.query)
if err != nil {
return err
}
result := resultRaw.ToIDispatch()
defer resultRaw.Clear()
count, err := oleInt64(result, "Count")
if err != nil {
return err
}
enumProperty, err := result.GetProperty("_NewEnum")
if err != nil {
return err
}
defer enumProperty.Clear()
enum, err := enumProperty.ToIUnknown().IEnumVARIANT(ole.IID_IEnumVariant)
if err != nil {
return err
}
if enum == nil {
return fmt.Errorf("can't get IEnumVARIANT, enum is nil")
}
defer enum.Release()
// Initialize a slice with Count capacity
dv.Set(reflect.MakeSlice(dv.Type(), 0, int(count)))
var errFieldMismatch error
for itemRaw, length, err := enum.Next(1); length > 0; itemRaw, length, err = enum.Next(1) {
if err != nil {
return err
}
err := func() error {
// item is a SWbemObject, but really a Win32_Process
item := itemRaw.ToIDispatch()
defer item.Release()
ev := reflect.New(elemType)
if err = s.cWMIClient.loadEntity(ev.Interface(), item); err != nil {
if _, ok := err.(*ErrFieldMismatch); ok {
// We continue loading entities even in the face of field mismatch errors.
// If we encounter any other error, that other error is returned. Otherwise,
// an ErrFieldMismatch is returned.
errFieldMismatch = err
} else {
return err
}
}
if mat != multiArgTypeStructPtr {
ev = ev.Elem()
}
dv.Set(reflect.Append(dv, ev))
return nil
}()
if err != nil {
return err
}
}
//fmt.Println("queryBackground: Finished")
return errFieldMismatch
}
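A sketch (not part of the vendored code) of reusing one `SWbemServices` connection for repeated queries instead of re-initializing COM each time, which is what the `SWbemServicesClient` field on `Client` enables; the struct mirrors the package's own doc example.

```go
// +build windows

package main

import (
	"log"

	"github.com/StackExchange/wmi"
)

type Win32_Process struct {
	Name string
}

func main() {
	svc, err := wmi.InitializeSWbemServices(wmi.DefaultClient)
	if err != nil {
		log.Fatal(err)
	}
	defer svc.Close()

	var procs []Win32_Process
	q := wmi.CreateQuery(&procs, "")
	// The same svc instance can serve many queries on its background thread.
	if err := svc.Query(q, &procs); err != nil {
		log.Fatal(err)
	}
	log.Printf("%d processes", len(procs))
}
```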

501
vendor/github.com/StackExchange/wmi/wmi.go generated vendored Normal file
View File

@ -0,0 +1,501 @@
// +build windows
/*
Package wmi provides a WQL interface for WMI on Windows.
Example code to print names of running processes:
type Win32_Process struct {
Name string
}
func main() {
var dst []Win32_Process
q := wmi.CreateQuery(&dst, "")
err := wmi.Query(q, &dst)
if err != nil {
log.Fatal(err)
}
for i, v := range dst {
println(i, v.Name)
}
}
*/
package wmi
import (
"bytes"
"errors"
"fmt"
"log"
"os"
"reflect"
"runtime"
"strconv"
"strings"
"sync"
"time"
"github.com/go-ole/go-ole"
"github.com/go-ole/go-ole/oleutil"
)
var l = log.New(os.Stdout, "", log.LstdFlags)
var (
ErrInvalidEntityType = errors.New("wmi: invalid entity type")
// ErrNilCreateObject is the error returned if CreateObject returns nil even
// if the error was nil.
ErrNilCreateObject = errors.New("wmi: create object returned nil")
lock sync.Mutex
)
// S_FALSE is returned by CoInitializeEx if it was already called on this thread.
const S_FALSE = 0x00000001
// QueryNamespace invokes Query with the given namespace on the local machine.
func QueryNamespace(query string, dst interface{}, namespace string) error {
return Query(query, dst, nil, namespace)
}
// Query runs the WQL query and appends the values to dst.
//
// dst must have type *[]S or *[]*S, for some struct type S. Fields selected in
// the query must have the same name in dst. Supported types are all signed and
// unsigned integers, time.Time, string, bool, or a pointer to one of those.
// Array types are not supported.
//
// By default, the local machine and default namespace are used. These can be
// changed using connectServerArgs. See
// http://msdn.microsoft.com/en-us/library/aa393720.aspx for details.
//
// Query is a wrapper around DefaultClient.Query.
func Query(query string, dst interface{}, connectServerArgs ...interface{}) error {
if DefaultClient.SWbemServicesClient == nil {
return DefaultClient.Query(query, dst, connectServerArgs...)
}
return DefaultClient.SWbemServicesClient.Query(query, dst, connectServerArgs...)
}
// A Client is a WMI query client.
//
// Its zero value (DefaultClient) is a usable client.
type Client struct {
// NonePtrZero specifies if nil values for fields which aren't pointers
// should be returned as the field types zero value.
//
// Setting this to true allows structs without pointer fields to be used
// without the risk of failure should a nil value be returned from WMI.
NonePtrZero bool
// PtrNil specifies if nil values for pointer fields should be returned
// as nil.
//
// Setting this to true will set pointer fields to nil where WMI
// returned nil, otherwise the types zero value will be returned.
PtrNil bool
// AllowMissingFields specifies that struct fields not present in the
// query result should not result in an error.
//
// Setting this to true allows custom queries to be used with full
// struct definitions instead of having to define multiple structs.
AllowMissingFields bool
// SWbemServiceClient is an optional SWbemServices object that can be
// initialized and then reused across multiple queries. If it is null
// then the method will initialize a new temporary client each time.
SWbemServicesClient *SWbemServices
}
// DefaultClient is the default Client and is used by Query, QueryNamespace
var DefaultClient = &Client{}
// Query runs the WQL query and appends the values to dst.
//
// dst must have type *[]S or *[]*S, for some struct type S. Fields selected in
// the query must have the same name in dst. Supported types are all signed and
// unsigned integers, time.Time, string, bool, or a pointer to one of those.
// Array types are not supported.
//
// By default, the local machine and default namespace are used. These can be
// changed using connectServerArgs. See
// http://msdn.microsoft.com/en-us/library/aa393720.aspx for details.
func (c *Client) Query(query string, dst interface{}, connectServerArgs ...interface{}) error {
dv := reflect.ValueOf(dst)
if dv.Kind() != reflect.Ptr || dv.IsNil() {
return ErrInvalidEntityType
}
dv = dv.Elem()
mat, elemType := checkMultiArg(dv)
if mat == multiArgTypeInvalid {
return ErrInvalidEntityType
}
lock.Lock()
defer lock.Unlock()
runtime.LockOSThread()
defer runtime.UnlockOSThread()
err := ole.CoInitializeEx(0, ole.COINIT_MULTITHREADED)
if err != nil {
oleCode := err.(*ole.OleError).Code()
if oleCode != ole.S_OK && oleCode != S_FALSE {
return err
}
}
defer ole.CoUninitialize()
unknown, err := oleutil.CreateObject("WbemScripting.SWbemLocator")
if err != nil {
return err
} else if unknown == nil {
return ErrNilCreateObject
}
defer unknown.Release()
wmi, err := unknown.QueryInterface(ole.IID_IDispatch)
if err != nil {
return err
}
defer wmi.Release()
// service is a SWbemServices
serviceRaw, err := oleutil.CallMethod(wmi, "ConnectServer", connectServerArgs...)
if err != nil {
return err
}
service := serviceRaw.ToIDispatch()
defer serviceRaw.Clear()
// result is a SWBemObjectSet
resultRaw, err := oleutil.CallMethod(service, "ExecQuery", query)
if err != nil {
return err
}
result := resultRaw.ToIDispatch()
defer resultRaw.Clear()
count, err := oleInt64(result, "Count")
if err != nil {
return err
}
enumProperty, err := result.GetProperty("_NewEnum")
if err != nil {
return err
}
defer enumProperty.Clear()
enum, err := enumProperty.ToIUnknown().IEnumVARIANT(ole.IID_IEnumVariant)
if err != nil {
return err
}
if enum == nil {
return fmt.Errorf("can't get IEnumVARIANT, enum is nil")
}
defer enum.Release()
// Initialize a slice with Count capacity
dv.Set(reflect.MakeSlice(dv.Type(), 0, int(count)))
var errFieldMismatch error
for itemRaw, length, err := enum.Next(1); length > 0; itemRaw, length, err = enum.Next(1) {
if err != nil {
return err
}
err := func() error {
// item is a SWbemObject, but really a Win32_Process
item := itemRaw.ToIDispatch()
defer item.Release()
ev := reflect.New(elemType)
if err = c.loadEntity(ev.Interface(), item); err != nil {
if _, ok := err.(*ErrFieldMismatch); ok {
// We continue loading entities even in the face of field mismatch errors.
// If we encounter any other error, that other error is returned. Otherwise,
// an ErrFieldMismatch is returned.
errFieldMismatch = err
} else {
return err
}
}
if mat != multiArgTypeStructPtr {
ev = ev.Elem()
}
dv.Set(reflect.Append(dv, ev))
return nil
}()
if err != nil {
return err
}
}
return errFieldMismatch
}
// ErrFieldMismatch is returned when a field is to be loaded into a different
// type than the one it was stored from, or when a field is missing or
// unexported in the destination struct.
// StructType is the type of the struct pointed to by the destination argument.
type ErrFieldMismatch struct {
StructType reflect.Type
FieldName string
Reason string
}
func (e *ErrFieldMismatch) Error() string {
return fmt.Sprintf("wmi: cannot load field %q into a %q: %s",
e.FieldName, e.StructType, e.Reason)
}
var timeType = reflect.TypeOf(time.Time{})
// loadEntity loads a SWbemObject into a struct pointer.
func (c *Client) loadEntity(dst interface{}, src *ole.IDispatch) (errFieldMismatch error) {
v := reflect.ValueOf(dst).Elem()
for i := 0; i < v.NumField(); i++ {
f := v.Field(i)
of := f
isPtr := f.Kind() == reflect.Ptr
if isPtr {
ptr := reflect.New(f.Type().Elem())
f.Set(ptr)
f = f.Elem()
}
n := v.Type().Field(i).Name
if !f.CanSet() {
return &ErrFieldMismatch{
StructType: of.Type(),
FieldName: n,
Reason: "CanSet() is false",
}
}
prop, err := oleutil.GetProperty(src, n)
if err != nil {
if !c.AllowMissingFields {
errFieldMismatch = &ErrFieldMismatch{
StructType: of.Type(),
FieldName: n,
Reason: "no such struct field",
}
}
continue
}
defer prop.Clear()
if prop.VT == 0x1 { //VT_NULL
continue
}
switch val := prop.Value().(type) {
case int8, int16, int32, int64, int:
v := reflect.ValueOf(val).Int()
switch f.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
f.SetInt(v)
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
f.SetUint(uint64(v))
default:
return &ErrFieldMismatch{
StructType: of.Type(),
FieldName: n,
Reason: "not an integer class",
}
}
case uint8, uint16, uint32, uint64:
v := reflect.ValueOf(val).Uint()
switch f.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
f.SetInt(int64(v))
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
f.SetUint(v)
default:
return &ErrFieldMismatch{
StructType: of.Type(),
FieldName: n,
Reason: "not an integer class",
}
}
case string:
switch f.Kind() {
case reflect.String:
f.SetString(val)
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
iv, err := strconv.ParseInt(val, 10, 64)
if err != nil {
return err
}
f.SetInt(iv)
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
uv, err := strconv.ParseUint(val, 10, 64)
if err != nil {
return err
}
f.SetUint(uv)
case reflect.Struct:
switch f.Type() {
case timeType:
if len(val) == 25 {
mins, err := strconv.Atoi(val[22:])
if err != nil {
return err
}
val = val[:22] + fmt.Sprintf("%02d%02d", mins/60, mins%60)
}
t, err := time.Parse("20060102150405.000000-0700", val)
if err != nil {
return err
}
f.Set(reflect.ValueOf(t))
}
}
case bool:
switch f.Kind() {
case reflect.Bool:
f.SetBool(val)
default:
return &ErrFieldMismatch{
StructType: of.Type(),
FieldName: n,
Reason: "not a bool",
}
}
case float32:
switch f.Kind() {
case reflect.Float32:
f.SetFloat(float64(val))
default:
return &ErrFieldMismatch{
StructType: of.Type(),
FieldName: n,
Reason: "not a Float32",
}
}
default:
if f.Kind() == reflect.Slice {
switch f.Type().Elem().Kind() {
case reflect.String:
safeArray := prop.ToArray()
if safeArray != nil {
arr := safeArray.ToValueArray()
fArr := reflect.MakeSlice(f.Type(), len(arr), len(arr))
for i, v := range arr {
s := fArr.Index(i)
s.SetString(v.(string))
}
f.Set(fArr)
}
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
safeArray := prop.ToArray()
if safeArray != nil {
arr := safeArray.ToValueArray()
fArr := reflect.MakeSlice(f.Type(), len(arr), len(arr))
for i, v := range arr {
s := fArr.Index(i)
s.SetUint(reflect.ValueOf(v).Uint())
}
f.Set(fArr)
}
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
safeArray := prop.ToArray()
if safeArray != nil {
arr := safeArray.ToValueArray()
fArr := reflect.MakeSlice(f.Type(), len(arr), len(arr))
for i, v := range arr {
s := fArr.Index(i)
s.SetInt(reflect.ValueOf(v).Int())
}
f.Set(fArr)
}
default:
return &ErrFieldMismatch{
StructType: of.Type(),
FieldName: n,
Reason: fmt.Sprintf("unsupported slice type (%T)", val),
}
}
} else {
typeof := reflect.TypeOf(val)
if typeof == nil && (isPtr || c.NonePtrZero) {
if (isPtr && c.PtrNil) || (!isPtr && c.NonePtrZero) {
of.Set(reflect.Zero(of.Type()))
}
break
}
return &ErrFieldMismatch{
StructType: of.Type(),
FieldName: n,
Reason: fmt.Sprintf("unsupported type (%T)", val),
}
}
}
}
return errFieldMismatch
}
type multiArgType int
const (
multiArgTypeInvalid multiArgType = iota
multiArgTypeStruct
multiArgTypeStructPtr
)
// checkMultiArg checks that v has type []S, []*S for some struct type S.
//
// It returns what category the slice's elements are, and the reflect.Type
// that represents S.
func checkMultiArg(v reflect.Value) (m multiArgType, elemType reflect.Type) {
if v.Kind() != reflect.Slice {
return multiArgTypeInvalid, nil
}
elemType = v.Type().Elem()
switch elemType.Kind() {
case reflect.Struct:
return multiArgTypeStruct, elemType
case reflect.Ptr:
elemType = elemType.Elem()
if elemType.Kind() == reflect.Struct {
return multiArgTypeStructPtr, elemType
}
}
return multiArgTypeInvalid, nil
}
func oleInt64(item *ole.IDispatch, prop string) (int64, error) {
v, err := oleutil.GetProperty(item, prop)
if err != nil {
return 0, err
}
defer v.Clear()
i := int64(v.Val)
return i, nil
}
// CreateQuery returns a WQL query string that queries all columns of src. where
// is an optional string that is appended to the query, to be used with WHERE
// clauses. In such a case, the "WHERE" string should appear at the beginning.
func CreateQuery(src interface{}, where string) string {
var b bytes.Buffer
b.WriteString("SELECT ")
s := reflect.Indirect(reflect.ValueOf(src))
t := s.Type()
if s.Kind() == reflect.Slice {
t = t.Elem()
}
if t.Kind() != reflect.Struct {
return ""
}
var fields []string
for i := 0; i < t.NumField(); i++ {
fields = append(fields, t.Field(i).Name)
}
b.WriteString(strings.Join(fields, ", "))
b.WriteString(" FROM ")
b.WriteString(t.Name())
b.WriteString(" " + where)
return b.String()
}

29
vendor/github.com/asdine/storm/.gitignore generated vendored Normal file
View File

@ -0,0 +1,29 @@
# IDE
.idea/
.vscode/
*.iml
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
*.test
*.prof

14
vendor/github.com/asdine/storm/.travis.yml generated vendored Normal file
View File

@ -0,0 +1,14 @@
language: go
before_install:
- go get github.com/stretchr/testify
go:
- 1.7
- 1.8
- 1.9
- "1.10"
- tip
script:
- go test -v ./...

21
vendor/github.com/asdine/storm/LICENSE generated vendored Normal file
View File

@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) [2017] [Asdine El Hrychy]
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

642
vendor/github.com/asdine/storm/README.md generated vendored Normal file
View File

@ -0,0 +1,642 @@
# Storm
[![Build Status](https://travis-ci.org/asdine/storm.svg)](https://travis-ci.org/asdine/storm)
[![GoDoc](https://godoc.org/github.com/asdine/storm?status.svg)](https://godoc.org/github.com/asdine/storm)
[![Go Report Card](https://goreportcard.com/badge/github.com/asdine/storm)](https://goreportcard.com/report/github.com/asdine/storm)
Storm is a simple and powerful toolkit for [BoltDB](https://github.com/coreos/bbolt). Basically, Storm provides indexes, a wide range of methods to store and fetch data, an advanced query system, and much more.
In addition to the examples below, see also the [examples in the GoDoc](https://godoc.org/github.com/asdine/storm#pkg-examples).
## Table of Contents
- [Getting Started](#getting-started)
- [Import Storm](#import-storm)
- [Open a database](#open-a-database)
- [Simple CRUD system](#simple-crud-system)
- [Declare your structures](#declare-your-structures)
- [Save your object](#save-your-object)
- [Auto Increment](#auto-increment)
- [Simple queries](#simple-queries)
- [Fetch one object](#fetch-one-object)
- [Fetch multiple objects](#fetch-multiple-objects)
- [Fetch all objects](#fetch-all-objects)
- [Fetch all objects sorted by index](#fetch-all-objects-sorted-by-index)
- [Fetch a range of objects](#fetch-a-range-of-objects)
- [Fetch objects by prefix](#fetch-objects-by-prefix)
- [Skip, Limit and Reverse](#skip-limit-and-reverse)
- [Delete an object](#delete-an-object)
- [Update an object](#update-an-object)
- [Initialize buckets and indexes before saving an object](#initialize-buckets-and-indexes-before-saving-an-object)
- [Drop a bucket](#drop-a-bucket)
- [Re-index a bucket](#re-index-a-bucket)
- [Advanced queries](#advanced-queries)
- [Transactions](#transactions)
- [Options](#options)
- [BoltOptions](#boltoptions)
- [MarshalUnmarshaler](#marshalunmarshaler)
- [Provided Codecs](#provided-codecs)
- [Use existing Bolt connection](#use-existing-bolt-connection)
- [Batch mode](#batch-mode)
- [Nodes and nested buckets](#nodes-and-nested-buckets)
- [Node options](#node-options)
- [Simple Key/Value store](#simple-keyvalue-store)
- [BoltDB](#boltdb)
- [License](#license)
- [Credits](#credits)
## Getting Started
```bash
go get -u github.com/asdine/storm
```
## Import Storm
```go
import "github.com/asdine/storm"
```
## Open a database
Quick way of opening a database
```go
db, err := storm.Open("my.db")
defer db.Close()
```
`Open` can receive multiple options to customize the way it behaves. See [Options](#options) below
## Simple CRUD system
### Declare your structures
```go
type User struct {
ID int // primary key
Group string `storm:"index"` // this field will be indexed
Email string `storm:"unique"` // this field will be indexed with a unique constraint
Name string // this field will not be indexed
Age int `storm:"index"`
}
```
The primary key can be of any type as long as it is not a zero value. Storm will search for the tag `id`; if it is not present, Storm will fall back to a field named `ID`.
```go
type User struct {
ThePrimaryKey string `storm:"id"`// primary key
Group string `storm:"index"` // this field will be indexed
Email string `storm:"unique"` // this field will be indexed with a unique constraint
Name string // this field will not be indexed
}
```
Storm handles tags in nested structures with the `inline` tag
```go
type Base struct {
Ident bson.ObjectId `storm:"id"`
}
type User struct {
Base `storm:"inline"`
Group string `storm:"index"`
Email string `storm:"unique"`
Name string
CreatedAt time.Time `storm:"index"`
}
```
### Save your object
```go
user := User{
ID: 10,
Group: "staff",
Email: "john@provider.com",
Name: "John",
Age: 21,
CreatedAt: time.Now(),
}
err := db.Save(&user)
// err == nil
user.ID++
err = db.Save(&user)
// err == storm.ErrAlreadyExists
```
That's it.
`Save` creates or updates all the required indexes and buckets, checks the unique constraints and saves the object to the store.
#### Auto Increment
Storm can auto-increment integer values so you don't have to manage them yourself when saving your objects. The new value is automatically written back to the field.
```go
type Product struct {
Pk int `storm:"id,increment"` // primary key with auto increment
Name string
IntegerField uint64 `storm:"increment"`
IndexedIntegerField uint32 `storm:"index,increment"`
UniqueIntegerField int16 `storm:"unique,increment=100"` // the starting value can be set
}
p := Product{Name: "Vaccum Cleaner"}
fmt.Println(p.Pk)
fmt.Println(p.IntegerField)
fmt.Println(p.IndexedIntegerField)
fmt.Println(p.UniqueIntegerField)
// 0
// 0
// 0
// 0
_ = db.Save(&p)
fmt.Println(p.Pk)
fmt.Println(p.IntegerField)
fmt.Println(p.IndexedIntegerField)
fmt.Println(p.UniqueIntegerField)
// 1
// 1
// 1
// 100
```
### Simple queries
Any object can be fetched, indexed or not. Storm uses indexes when available, otherwise it uses the [query system](#advanced-queries).
#### Fetch one object
```go
var user User
err := db.One("Email", "john@provider.com", &user)
// err == nil
err = db.One("Name", "John", &user)
// err == nil
err = db.One("Name", "Jack", &user)
// err == storm.ErrNotFound
```
#### Fetch multiple objects
```go
var users []User
err := db.Find("Group", "staff", &users)
```
#### Fetch all objects
```go
var users []User
err := db.All(&users)
```
#### Fetch all objects sorted by index
```go
var users []User
err := db.AllByIndex("CreatedAt", &users)
```
#### Fetch a range of objects
```go
var users []User
err := db.Range("Age", 10, 21, &users)
```
#### Fetch objects by prefix
```go
var users []User
err := db.Prefix("Name", "Jo", &users)
```
#### Skip, Limit and Reverse
```go
var users []User
err := db.Find("Group", "staff", &users, storm.Skip(10))
err = db.Find("Group", "staff", &users, storm.Limit(10))
err = db.Find("Group", "staff", &users, storm.Reverse())
err = db.Find("Group", "staff", &users, storm.Limit(10), storm.Skip(10), storm.Reverse())
err = db.All(&users, storm.Limit(10), storm.Skip(10), storm.Reverse())
err = db.AllByIndex("CreatedAt", &users, storm.Limit(10), storm.Skip(10), storm.Reverse())
err = db.Range("Age", 10, 21, &users, storm.Limit(10), storm.Skip(10), storm.Reverse())
```
#### Delete an object
```go
err := db.DeleteStruct(&user)
```
#### Update an object
```go
// Update multiple fields
err := db.Update(&User{ID: 10, Name: "Jack", Age: 45})
// Update a single field
err := db.UpdateField(&User{ID: 10}, "Age", 0)
```
#### Initialize buckets and indexes before saving an object
```go
err := db.Init(&User{})
```
Useful when starting your application
#### Drop a bucket
Using the struct
```go
err := db.Drop(&User)
```
Using the bucket name
```go
err := db.Drop("User")
```
#### Re-index a bucket
```go
err := db.ReIndex(&User{})
```
Useful when the structure has changed
### Advanced queries
For more complex queries, you can use the `Select` method.
`Select` takes any number of [`Matcher`](https://godoc.org/github.com/asdine/storm/q#Matcher) from the [`q`](https://godoc.org/github.com/asdine/storm/q) package.
Here are some common Matchers:
```go
// Equality
q.Eq("Name", John)
// Strictly greater than
q.Gt("Age", 7)
// Lesser than or equal to
q.Lte("Age", 77)
// Regex with name that starts with the letter D
q.Re("Name", "^D")
// In the given slice of values
q.In("Group", []string{"Staff", "Admin"})
// Comparing fields
q.EqF("FieldName", "SecondFieldName")
q.LtF("FieldName", "SecondFieldName")
q.GtF("FieldName", "SecondFieldName")
q.LteF("FieldName", "SecondFieldName")
q.GteF("FieldName", "SecondFieldName")
```
Matchers can also be combined with `And`, `Or` and `Not`:
```go
// Match if all match
q.And(
q.Gt("Age", 7),
q.Re("Name", "^D")
)
// Match if one matches
q.Or(
q.Re("Name", "^A"),
q.Not(
q.Re("Name", "^B")
),
q.Re("Name", "^C"),
q.In("Group", []string{"Staff", "Admin"}),
q.And(
q.StrictEq("Password", []byte(password)),
q.Eq("Registered", true)
)
)
```
You can find the complete list in the [documentation](https://godoc.org/github.com/asdine/storm/q#Matcher).
`Select` takes any number of matchers and wraps them into a `q.And()` so it's not necessary to specify it. It returns a [`Query`](https://godoc.org/github.com/asdine/storm#Query) type.
```go
query := db.Select(q.Gte("Age", 7), q.Lte("Age", 77))
```
The `Query` type contains methods to filter and order the records.
```go
// Limit
query = query.Limit(10)
// Skip
query = query.Skip(20)
// Calls can also be chained
query = query.Limit(10).Skip(20).OrderBy("Age").Reverse()
```
But also to specify how to fetch them.
```go
var users []User
err = query.Find(&users)
var user User
err = query.First(&user)
```
Examples with `Select`:
```go
// Find all users with an ID between 10 and 100
err = db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).Find(&users)
// Nested matchers
err = db.Select(q.Or(
q.Gt("ID", 50),
q.Lt("Age", 21),
q.And(
q.Eq("Group", "admin"),
q.Gte("Age", 21),
),
)).Find(&users)
query := db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).Limit(10).Skip(5).Reverse().OrderBy("Age", "Name")
// Find multiple records
err = query.Find(&users)
// or
err = db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).Limit(10).Skip(5).Reverse().OrderBy("Age", "Name").Find(&users)
// Find first record
err = query.First(&user)
// or
err = db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).Limit(10).Skip(5).Reverse().OrderBy("Age", "Name").First(&user)
// Delete all matching records
err = query.Delete(new(User))
// Fetching records one by one (useful when the bucket contains a lot of records)
query = db.Select(q.Gte("ID", 10),q.Lte("ID", 100)).OrderBy("Age", "Name")
err = query.Each(new(User), func(record interface{}) error {
u := record.(*User)
...
return nil
})
```
See the [documentation](https://godoc.org/github.com/asdine/storm#Query) for a complete list of methods.
### Transactions
```go
tx, err := db.Begin(true)
if err != nil {
return err
}
defer tx.Rollback()
accountA.Amount -= 100
accountB.Amount += 100
err = tx.Save(accountA)
if err != nil {
return err
}
err = tx.Save(accountB)
if err != nil {
return err
}
return tx.Commit()
```
### Options
Storm options are functions that can be passed when constructing your Storm instance. You can pass any number of them.
#### BoltOptions
By default, Storm opens a database with the mode `0600` and a timeout of one second.
You can change this behavior by using `BoltOptions`
```go
db, err := storm.Open("my.db", storm.BoltOptions(0600, &bolt.Options{Timeout: 1 * time.Second}))
```
#### MarshalUnmarshaler
To store the data in BoltDB, Storm marshals it in JSON by default. If you wish to change this behavior you can pass a codec that implements [`codec.MarshalUnmarshaler`](https://godoc.org/github.com/asdine/storm/codec#MarshalUnmarshaler) via the [`storm.Codec`](https://godoc.org/github.com/asdine/storm#Codec) option:
```go
db := storm.Open("my.db", storm.Codec(myCodec))
```
##### Provided Codecs
You can easily implement your own `MarshalUnmarshaler`, but Storm comes with built-in support for [JSON](https://godoc.org/github.com/asdine/storm/codec/json) (default), [GOB](https://godoc.org/github.com/asdine/storm/codec/gob), [Sereal](https://godoc.org/github.com/asdine/storm/codec/sereal), [Protocol Buffers](https://godoc.org/github.com/asdine/storm/codec/protobuf) and [MessagePack](https://godoc.org/github.com/asdine/storm/codec/msgpack).
These can be used by importing the relevant package and using that codec to configure Storm. The example below shows all variants (without proper error handling):
```go
import (
"github.com/asdine/storm"
"github.com/asdine/storm/codec/gob"
"github.com/asdine/storm/codec/json"
"github.com/asdine/storm/codec/sereal"
"github.com/asdine/storm/codec/protobuf"
"github.com/asdine/storm/codec/msgpack"
)
var gobDb, _ = storm.Open("gob.db", storm.Codec(gob.Codec))
var jsonDb, _ = storm.Open("json.db", storm.Codec(json.Codec))
var serealDb, _ = storm.Open("sereal.db", storm.Codec(sereal.Codec))
var protobufDb, _ = storm.Open("protobuf.db", storm.Codec(protobuf.Codec))
var msgpackDb, _ = storm.Open("msgpack.db", storm.Codec(msgpack.Codec))
```
**Tip**: Adding Storm tags to generated Protobuf files can be tricky. A good solution is to use [this tool](https://github.com/favadi/protoc-go-inject-tag) to inject the tags during the compilation.
#### Use existing Bolt connection
You can use an existing connection and pass it to Storm
```go
bDB, _ := bolt.Open(filepath.Join(dir, "bolt.db"), 0600, &bolt.Options{Timeout: 10 * time.Second})
db := storm.Open("my.db", storm.UseDB(bDB))
```
#### Batch mode
Batch mode can be enabled to speed up concurrent writes (see [Batch read-write transactions](https://github.com/coreos/bbolt#batch-read-write-transactions))
```go
db := storm.Open("my.db", storm.Batch())
```
## Nodes and nested buckets
Storm takes advantage of BoltDB's nested buckets feature by using `storm.Node`.
A `storm.Node` is the underlying object used by `storm.DB` to manipulate a bucket.
To create a nested bucket and use the same API as `storm.DB`, you can use the `DB.From` method.
```go
repo := db.From("repo")
err := repo.Save(&Issue{
Title: "I want more features",
Author: user.ID,
})
err = repo.Save(newRelease("0.10"))
var issues []Issue
err = repo.Find("Author", user.ID, &issues)
var release Release
err = repo.One("Tag", "0.10", &release)
```
You can also chain the nodes to create a hierarchy
```go
chars := db.From("characters")
heroes := chars.From("heroes")
enemies := chars.From("enemies")
items := db.From("items")
potions := items.From("consumables").From("medicine").From("potions")
```
You can even pass the entire hierarchy as arguments to `From`:
```go
privateNotes := db.From("notes", "private")
workNotes := db.From("notes", "work")
```
### Node options
A Node can also be configured. Activating an option on a Node creates a copy, so a Node is always thread-safe.
```go
n := db.From("my-node")
```
Give a bolt.Tx transaction to the Node
```go
n = n.WithTransaction(tx)
```
Enable batch mode
```go
n = n.WithBatch(true)
```
Use a Codec
```go
n = n.WithCodec(gob.Codec)
```
## Simple Key/Value store
Storm can be used as a simple, robust, key/value store that can store anything.
The key and the value can be of any type as long as the key is not a zero value.
Saving data:
```go
db.Set("logs", time.Now(), "I'm eating my breakfast man")
db.Set("sessions", bson.NewObjectId(), &someUser)
db.Set("weird storage", "754-3010", map[string]interface{}{
"hair": "blonde",
"likes": []string{"cheese", "star wars"},
})
```
Fetching data:
```go
user := User{}
db.Get("sessions", someObjectId, &user)
var details map[string]interface{}
db.Get("weird storage", "754-3010", &details)
db.Get("sessions", someObjectId, &details)
```
Deleting data:
```go
db.Delete("sessions", someObjectId)
db.Delete("weird storage", "754-3010")
```
You can find other useful methods in the [documentation](https://godoc.org/github.com/asdine/storm#KeyValueStore).
## BoltDB
BoltDB is still easily accessible and can be used as usual
```go
db.Bolt.View(func(tx *bolt.Tx) error {
bucket := tx.Bucket([]byte("my bucket"))
val := bucket.Get([]byte("any id"))
fmt.Println(string(val))
return nil
})
```
A transaction can also be passed to Storm
```go
db.Bolt.Update(func(tx *bolt.Tx) error {
...
dbx := db.WithTransaction(tx)
err = dbx.Save(&user)
...
return nil
})
```
## License
MIT
## Credits
- [Asdine El Hrychy](https://github.com/asdine)
- [Bjørn Erik Pedersen](https://github.com/bep)

47
vendor/github.com/asdine/storm/bucket.go generated vendored Normal file
View File

@ -0,0 +1,47 @@
package storm
import bolt "go.etcd.io/bbolt"
// CreateBucketIfNotExists creates the bucket below the current node if it doesn't
// already exist.
func (n *node) CreateBucketIfNotExists(tx *bolt.Tx, bucket string) (*bolt.Bucket, error) {
var b *bolt.Bucket
var err error
bucketNames := append(n.rootBucket, bucket)
for _, bucketName := range bucketNames {
if b != nil {
if b, err = b.CreateBucketIfNotExists([]byte(bucketName)); err != nil {
return nil, err
}
} else {
if b, err = tx.CreateBucketIfNotExists([]byte(bucketName)); err != nil {
return nil, err
}
}
}
return b, nil
}
// GetBucket returns the given bucket below the current node.
func (n *node) GetBucket(tx *bolt.Tx, children ...string) *bolt.Bucket {
var b *bolt.Bucket
bucketNames := append(n.rootBucket, children...)
for _, bucketName := range bucketNames {
if b != nil {
if b = b.Bucket([]byte(bucketName)); b == nil {
return nil
}
} else {
if b = tx.Bucket([]byte(bucketName)); b == nil {
return nil
}
}
}
return b
}

1
vendor/github.com/asdine/storm/codec/.gitignore generated vendored Normal file
View File

@ -0,0 +1 @@
*.db

11
vendor/github.com/asdine/storm/codec/codec.go generated vendored Normal file
View File

@ -0,0 +1,11 @@
// Package codec contains sub-packages with different codecs that can be used
// to encode and decode entities in Storm.
package codec
// MarshalUnmarshaler represents a codec used to marshal and unmarshal entities.
type MarshalUnmarshaler interface {
Marshal(v interface{}) ([]byte, error)
Unmarshal(b []byte, v interface{}) error
// name of this codec
Name() string
}

25
vendor/github.com/asdine/storm/codec/json/json.go generated vendored Normal file
View File

@ -0,0 +1,25 @@
// Package json contains a codec to encode and decode entities in JSON format
package json
import (
"encoding/json"
)
const name = "json"
// Codec that encodes to and decodes from JSON.
var Codec = new(jsonCodec)
type jsonCodec int
func (j jsonCodec) Marshal(v interface{}) ([]byte, error) {
return json.Marshal(v)
}
func (j jsonCodec) Unmarshal(b []byte, v interface{}) error {
return json.Unmarshal(b, v)
}
func (j jsonCodec) Name() string {
return name
}

51
vendor/github.com/asdine/storm/errors.go generated vendored Normal file
View File

@ -0,0 +1,51 @@
package storm
import "errors"
// Errors
var (
// ErrNoID is returned when no ID field or id tag is found in the struct.
ErrNoID = errors.New("missing struct tag id or ID field")
// ErrZeroID is returned when the ID field is a zero value.
ErrZeroID = errors.New("id field must not be a zero value")
// ErrBadType is returned when a method receives an unexpected value type.
ErrBadType = errors.New("provided data must be a struct or a pointer to struct")
// ErrAlreadyExists is returned when trying to set an existing value on a field that has a unique index.
ErrAlreadyExists = errors.New("already exists")
// ErrNilParam is returned when the specified param is expected to be not nil.
ErrNilParam = errors.New("param must not be nil")
// ErrUnknownTag is returned when an unexpected tag is specified.
ErrUnknownTag = errors.New("unknown tag")
// ErrIdxNotFound is returned when the specified index is not found.
ErrIdxNotFound = errors.New("index not found")
// ErrSlicePtrNeeded is returned when an unexpected value is given, instead of a pointer to slice.
ErrSlicePtrNeeded = errors.New("provided target must be a pointer to slice")
// ErrStructPtrNeeded is returned when an unexpected value is given, instead of a pointer to struct.
ErrStructPtrNeeded = errors.New("provided target must be a pointer to struct")
// ErrPtrNeeded is returned when an unexpected value is given, instead of a pointer.
ErrPtrNeeded = errors.New("provided target must be a pointer to a valid variable")
// ErrNoName is returned when the specified struct has no name.
ErrNoName = errors.New("provided target must have a name")
// ErrNotFound is returned when the specified record is not saved in the bucket.
ErrNotFound = errors.New("not found")
// ErrNotInTransaction is returned when trying to rollback or commit when not in transaction.
ErrNotInTransaction = errors.New("not in transaction")
// ErrIncompatibleValue is returned when trying to set a value with a different type than the chosen field
ErrIncompatibleValue = errors.New("incompatible value")
// ErrDifferentCodec is returned when using a codec different than the first codec used with the bucket.
ErrDifferentCodec = errors.New("the selected codec is incompatible with this bucket")
)

226
vendor/github.com/asdine/storm/extract.go generated vendored Normal file
View File

@ -0,0 +1,226 @@
package storm
import (
"fmt"
"reflect"
"strconv"
"strings"
"github.com/asdine/storm/index"
bolt "go.etcd.io/bbolt"
)
// Storm tags
const (
tagID = "id"
tagIdx = "index"
tagUniqueIdx = "unique"
tagInline = "inline"
tagIncrement = "increment"
indexPrefix = "__storm_index_"
)
type fieldConfig struct {
Name string
Index string
IsZero bool
IsID bool
Increment bool
IncrementStart int64
IsInteger bool
Value *reflect.Value
ForceUpdate bool
}
// structConfig is a structure gathering all the relevant information about a model
type structConfig struct {
Name string
Fields map[string]*fieldConfig
ID *fieldConfig
}
func extract(s *reflect.Value, mi ...*structConfig) (*structConfig, error) {
if s.Kind() == reflect.Ptr {
e := s.Elem()
s = &e
}
if s.Kind() != reflect.Struct {
return nil, ErrBadType
}
typ := s.Type()
var child bool
var m *structConfig
if len(mi) > 0 {
m = mi[0]
child = true
} else {
m = &structConfig{}
m.Fields = make(map[string]*fieldConfig)
}
if m.Name == "" {
m.Name = typ.Name()
}
numFields := s.NumField()
for i := 0; i < numFields; i++ {
field := typ.Field(i)
value := s.Field(i)
if field.PkgPath != "" {
continue
}
err := extractField(&value, &field, m, child)
if err != nil {
return nil, err
}
}
if child {
return m, nil
}
if m.ID == nil {
return nil, ErrNoID
}
if m.Name == "" {
return nil, ErrNoName
}
return m, nil
}
func extractField(value *reflect.Value, field *reflect.StructField, m *structConfig, isChild bool) error {
var f *fieldConfig
var err error
tag := field.Tag.Get("storm")
if tag != "" {
f = &fieldConfig{
Name: field.Name,
IsZero: isZero(value),
IsInteger: isInteger(value),
Value: value,
IncrementStart: 1,
}
tags := strings.Split(tag, ",")
for _, tag := range tags {
switch tag {
case "id":
f.IsID = true
f.Index = tagUniqueIdx
case tagUniqueIdx, tagIdx:
f.Index = tag
case tagInline:
if value.Kind() == reflect.Ptr {
e := value.Elem()
value = &e
}
if value.Kind() == reflect.Struct {
a := value.Addr()
_, err := extract(&a, m)
if err != nil {
return err
}
}
// we don't need to save this field
return nil
default:
if strings.HasPrefix(tag, tagIncrement) {
f.Increment = true
parts := strings.Split(tag, "=")
if parts[0] != tagIncrement {
return ErrUnknownTag
}
if len(parts) > 1 {
f.IncrementStart, err = strconv.ParseInt(parts[1], 0, 64)
if err != nil {
return err
}
}
} else {
return ErrUnknownTag
}
}
}
if _, ok := m.Fields[f.Name]; !ok || !isChild {
m.Fields[f.Name] = f
}
}
if m.ID == nil && f != nil && f.IsID {
m.ID = f
}
// the field is named ID and no ID field has been detected before
if m.ID == nil && field.Name == "ID" {
if f == nil {
f = &fieldConfig{
Index: tagUniqueIdx,
Name: field.Name,
IsZero: isZero(value),
IsInteger: isInteger(value),
IsID: true,
Value: value,
IncrementStart: 1,
}
m.Fields[field.Name] = f
}
m.ID = f
}
return nil
}
func extractSingleField(ref *reflect.Value, fieldName string) (*structConfig, error) {
var cfg structConfig
cfg.Fields = make(map[string]*fieldConfig)
f, ok := ref.Type().FieldByName(fieldName)
if !ok || f.PkgPath != "" {
return nil, fmt.Errorf("field %s not found", fieldName)
}
v := ref.FieldByName(fieldName)
err := extractField(&v, &f, &cfg, false)
if err != nil {
return nil, err
}
return &cfg, nil
}
func getIndex(bucket *bolt.Bucket, idxKind string, fieldName string) (index.Index, error) {
var idx index.Index
var err error
switch idxKind {
case tagUniqueIdx:
idx, err = index.NewUniqueIndex(bucket, []byte(indexPrefix+fieldName))
case tagIdx:
idx, err = index.NewListIndex(bucket, []byte(indexPrefix+fieldName))
default:
err = ErrIdxNotFound
}
return idx, err
}
func isZero(v *reflect.Value) bool {
zero := reflect.Zero(v.Type()).Interface()
current := v.Interface()
return reflect.DeepEqual(current, zero)
}
func isInteger(v *reflect.Value) bool {
kind := v.Kind()
return v != nil && kind >= reflect.Int && kind <= reflect.Uint64
}

499
vendor/github.com/asdine/storm/finder.go generated vendored Normal file
View File

@ -0,0 +1,499 @@
package storm
import (
"fmt"
"reflect"
"github.com/asdine/storm/index"
"github.com/asdine/storm/q"
bolt "go.etcd.io/bbolt"
)
// A Finder can fetch types from BoltDB.
type Finder interface {
// One returns one record by the specified index
One(fieldName string, value interface{}, to interface{}) error
// Find returns one or more records by the specified index
Find(fieldName string, value interface{}, to interface{}, options ...func(q *index.Options)) error
// AllByIndex gets all the records of a bucket that are indexed in the specified index
AllByIndex(fieldName string, to interface{}, options ...func(*index.Options)) error
// All gets all the records of a bucket.
// If there are no records it returns no error and the 'to' parameter is set to an empty slice.
All(to interface{}, options ...func(*index.Options)) error
// Select a list of records that match a list of matchers. Doesn't use indexes.
Select(matchers ...q.Matcher) Query
// Range returns one or more records by the specified index within the specified range
Range(fieldName string, min, max, to interface{}, options ...func(*index.Options)) error
// Prefix returns one or more records whose given field starts with the specified prefix.
Prefix(fieldName string, prefix string, to interface{}, options ...func(*index.Options)) error
// Count counts all the records of a bucket
Count(data interface{}) (int, error)
}
// One returns one record by the specified index
func (n *node) One(fieldName string, value interface{}, to interface{}) error {
sink, err := newFirstSink(n, to)
if err != nil {
return err
}
bucketName := sink.bucketName()
if bucketName == "" {
return ErrNoName
}
if fieldName == "" {
return ErrNotFound
}
ref := reflect.Indirect(sink.ref)
cfg, err := extractSingleField(&ref, fieldName)
if err != nil {
return err
}
field, ok := cfg.Fields[fieldName]
if !ok || (!field.IsID && field.Index == "") {
query := newQuery(n, q.StrictEq(fieldName, value))
query.Limit(1)
if n.tx != nil {
err = query.query(n.tx, sink)
} else {
err = n.s.Bolt.View(func(tx *bolt.Tx) error {
return query.query(tx, sink)
})
}
if err != nil {
return err
}
return sink.flush()
}
val, err := toBytes(value, n.codec)
if err != nil {
return err
}
return n.readTx(func(tx *bolt.Tx) error {
return n.one(tx, bucketName, fieldName, cfg, to, val, field.IsID)
})
}
func (n *node) one(tx *bolt.Tx, bucketName, fieldName string, cfg *structConfig, to interface{}, val []byte, skipIndex bool) error {
bucket := n.GetBucket(tx, bucketName)
if bucket == nil {
return ErrNotFound
}
var id []byte
if !skipIndex {
idx, err := getIndex(bucket, cfg.Fields[fieldName].Index, fieldName)
if err != nil {
if err == index.ErrNotFound {
return ErrNotFound
}
return err
}
id = idx.Get(val)
} else {
id = val
}
if id == nil {
return ErrNotFound
}
raw := bucket.Get(id)
if raw == nil {
return ErrNotFound
}
return n.codec.Unmarshal(raw, to)
}
// Find returns one or more records by the specified index
func (n *node) Find(fieldName string, value interface{}, to interface{}, options ...func(q *index.Options)) error {
sink, err := newListSink(n, to)
if err != nil {
return err
}
bucketName := sink.bucketName()
if bucketName == "" {
return ErrNoName
}
ref := reflect.Indirect(reflect.New(sink.elemType))
cfg, err := extractSingleField(&ref, fieldName)
if err != nil {
return err
}
opts := index.NewOptions()
for _, fn := range options {
fn(opts)
}
field, ok := cfg.Fields[fieldName]
if !ok || (!field.IsID && (field.Index == "" || value == nil)) {
query := newQuery(n, q.Eq(fieldName, value))
query.Skip(opts.Skip).Limit(opts.Limit)
if opts.Reverse {
query.Reverse()
}
err = n.readTx(func(tx *bolt.Tx) error {
return query.query(tx, sink)
})
if err != nil {
return err
}
return sink.flush()
}
val, err := toBytes(value, n.codec)
if err != nil {
return err
}
return n.readTx(func(tx *bolt.Tx) error {
return n.find(tx, bucketName, fieldName, cfg, sink, val, opts)
})
}
func (n *node) find(tx *bolt.Tx, bucketName, fieldName string, cfg *structConfig, sink *listSink, val []byte, opts *index.Options) error {
bucket := n.GetBucket(tx, bucketName)
if bucket == nil {
return ErrNotFound
}
idx, err := getIndex(bucket, cfg.Fields[fieldName].Index, fieldName)
if err != nil {
return err
}
list, err := idx.All(val, opts)
if err != nil {
if err == index.ErrNotFound {
return ErrNotFound
}
return err
}
sink.results = reflect.MakeSlice(reflect.Indirect(sink.ref).Type(), len(list), len(list))
sorter := newSorter(n, sink)
for i := range list {
raw := bucket.Get(list[i])
if raw == nil {
return ErrNotFound
}
if _, err := sorter.filter(nil, bucket, list[i], raw); err != nil {
return err
}
}
return sorter.flush()
}
// AllByIndex gets all the records of a bucket that are indexed in the specified index
func (n *node) AllByIndex(fieldName string, to interface{}, options ...func(*index.Options)) error {
if fieldName == "" {
return n.All(to, options...)
}
ref := reflect.ValueOf(to)
if ref.Kind() != reflect.Ptr || ref.Elem().Kind() != reflect.Slice {
return ErrSlicePtrNeeded
}
typ := reflect.Indirect(ref).Type().Elem()
if typ.Kind() == reflect.Ptr {
typ = typ.Elem()
}
newElem := reflect.New(typ)
cfg, err := extract(&newElem)
if err != nil {
return err
}
if cfg.ID.Name == fieldName {
return n.All(to, options...)
}
opts := index.NewOptions()
for _, fn := range options {
fn(opts)
}
return n.readTx(func(tx *bolt.Tx) error {
return n.allByIndex(tx, fieldName, cfg, &ref, opts)
})
}
func (n *node) allByIndex(tx *bolt.Tx, fieldName string, cfg *structConfig, ref *reflect.Value, opts *index.Options) error {
bucket := n.GetBucket(tx, cfg.Name)
if bucket == nil {
return ErrNotFound
}
fieldCfg, ok := cfg.Fields[fieldName]
if !ok {
return ErrNotFound
}
idx, err := getIndex(bucket, fieldCfg.Index, fieldName)
if err != nil {
return err
}
list, err := idx.AllRecords(opts)
if err != nil {
if err == index.ErrNotFound {
return ErrNotFound
}
return err
}
results := reflect.MakeSlice(reflect.Indirect(*ref).Type(), len(list), len(list))
for i := range list {
raw := bucket.Get(list[i])
if raw == nil {
return ErrNotFound
}
err = n.codec.Unmarshal(raw, results.Index(i).Addr().Interface())
if err != nil {
return err
}
}
reflect.Indirect(*ref).Set(results)
return nil
}
// All gets all the records of a bucket.
// If there are no records it returns no error and the 'to' parameter is set to an empty slice.
func (n *node) All(to interface{}, options ...func(*index.Options)) error {
opts := index.NewOptions()
for _, fn := range options {
fn(opts)
}
query := newQuery(n, nil).Limit(opts.Limit).Skip(opts.Skip)
if opts.Reverse {
query.Reverse()
}
err := query.Find(to)
if err != nil && err != ErrNotFound {
return err
}
if err == ErrNotFound {
ref := reflect.ValueOf(to)
results := reflect.MakeSlice(reflect.Indirect(ref).Type(), 0, 0)
reflect.Indirect(ref).Set(results)
}
return nil
}
// Range returns one or more records by the specified index within the specified range
func (n *node) Range(fieldName string, min, max, to interface{}, options ...func(*index.Options)) error {
sink, err := newListSink(n, to)
if err != nil {
return err
}
bucketName := sink.bucketName()
if bucketName == "" {
return ErrNoName
}
ref := reflect.Indirect(reflect.New(sink.elemType))
cfg, err := extractSingleField(&ref, fieldName)
if err != nil {
return err
}
opts := index.NewOptions()
for _, fn := range options {
fn(opts)
}
field, ok := cfg.Fields[fieldName]
if !ok || (!field.IsID && field.Index == "") {
query := newQuery(n, q.And(q.Gte(fieldName, min), q.Lte(fieldName, max)))
query.Skip(opts.Skip).Limit(opts.Limit)
if opts.Reverse {
query.Reverse()
}
err = n.readTx(func(tx *bolt.Tx) error {
return query.query(tx, sink)
})
if err != nil {
return err
}
return sink.flush()
}
mn, err := toBytes(min, n.codec)
if err != nil {
return err
}
mx, err := toBytes(max, n.codec)
if err != nil {
return err
}
return n.readTx(func(tx *bolt.Tx) error {
return n.rnge(tx, bucketName, fieldName, cfg, sink, mn, mx, opts)
})
}
func (n *node) rnge(tx *bolt.Tx, bucketName, fieldName string, cfg *structConfig, sink *listSink, min, max []byte, opts *index.Options) error {
bucket := n.GetBucket(tx, bucketName)
if bucket == nil {
reflect.Indirect(sink.ref).SetLen(0)
return nil
}
idx, err := getIndex(bucket, cfg.Fields[fieldName].Index, fieldName)
if err != nil {
return err
}
list, err := idx.Range(min, max, opts)
if err != nil {
return err
}
sink.results = reflect.MakeSlice(reflect.Indirect(sink.ref).Type(), len(list), len(list))
sorter := newSorter(n, sink)
for i := range list {
raw := bucket.Get(list[i])
if raw == nil {
return ErrNotFound
}
if _, err := sorter.filter(nil, bucket, list[i], raw); err != nil {
return err
}
}
return sorter.flush()
}
// Prefix returns one or more records whose given field starts with the specified prefix.
func (n *node) Prefix(fieldName string, prefix string, to interface{}, options ...func(*index.Options)) error {
sink, err := newListSink(n, to)
if err != nil {
return err
}
bucketName := sink.bucketName()
if bucketName == "" {
return ErrNoName
}
ref := reflect.Indirect(reflect.New(sink.elemType))
cfg, err := extractSingleField(&ref, fieldName)
if err != nil {
return err
}
opts := index.NewOptions()
for _, fn := range options {
fn(opts)
}
field, ok := cfg.Fields[fieldName]
if !ok || (!field.IsID && field.Index == "") {
query := newQuery(n, q.Re(fieldName, fmt.Sprintf("^%s", prefix)))
query.Skip(opts.Skip).Limit(opts.Limit)
if opts.Reverse {
query.Reverse()
}
err = n.readTx(func(tx *bolt.Tx) error {
return query.query(tx, sink)
})
if err != nil {
return err
}
return sink.flush()
}
prfx, err := toBytes(prefix, n.codec)
if err != nil {
return err
}
return n.readTx(func(tx *bolt.Tx) error {
return n.prefix(tx, bucketName, fieldName, cfg, sink, prfx, opts)
})
}
func (n *node) prefix(tx *bolt.Tx, bucketName, fieldName string, cfg *structConfig, sink *listSink, prefix []byte, opts *index.Options) error {
bucket := n.GetBucket(tx, bucketName)
if bucket == nil {
reflect.Indirect(sink.ref).SetLen(0)
return nil
}
idx, err := getIndex(bucket, cfg.Fields[fieldName].Index, fieldName)
if err != nil {
return err
}
list, err := idx.Prefix(prefix, opts)
if err != nil {
return err
}
sink.results = reflect.MakeSlice(reflect.Indirect(sink.ref).Type(), len(list), len(list))
sorter := newSorter(n, sink)
for i := range list {
raw := bucket.Get(list[i])
if raw == nil {
return ErrNotFound
}
if _, err := sorter.filter(nil, bucket, list[i], raw); err != nil {
return err
}
}
return sorter.flush()
}
// Count counts all the records of a bucket
func (n *node) Count(data interface{}) (int, error) {
return n.Select().Count(data)
}
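
The finder methods above (All, Range, Prefix, Count) are easiest to read next to a usage sketch. A minimal one follows; the User type, its struct tags, the database path and the storm.Open call are assumptions for illustration and are not part of this file.

package main

import (
	"fmt"
	"log"

	"github.com/asdine/storm"
)

// User is a hypothetical type; "id,increment", "unique" and "index" are
// storm's usual struct tags for the primary key and the two index kinds.
type User struct {
	ID    int    `storm:"id,increment"`
	Login string `storm:"unique"`
	Age   int    `storm:"index"`
}

func main() {
	db, err := storm.Open("users.db") // storm.Open is defined outside this hunk
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	for i := 1; i <= 5; i++ {
		if err := db.Save(&User{Login: fmt.Sprintf("user%d", i), Age: 20 + i}); err != nil {
			log.Fatal(err)
		}
	}

	var users []User
	// All: every User record, newest first, capped at 3.
	if err := db.All(&users, storm.Limit(3), storm.Reverse()); err != nil {
		log.Fatal(err)
	}
	// Range: records whose indexed Age field is between 22 and 24 inclusive.
	if err := db.Range("Age", 22, 24, &users); err != nil {
		log.Fatal(err)
	}
	// Prefix: records whose unique Login starts with "user".
	if err := db.Prefix("Login", "user", &users); err != nil {
		log.Fatal(err)
	}
	// Count: number of User records in the bucket.
	total, err := db.Count(&User{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(users), total)
}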

14
vendor/github.com/asdine/storm/index/errors.go generated vendored Normal file
View File

@ -0,0 +1,14 @@
package index
import "errors"
var (
// ErrNotFound is returned when the specified record is not saved in the bucket.
ErrNotFound = errors.New("not found")
// ErrAlreadyExists is returned when trying to set an existing value on a field that has a unique index.
ErrAlreadyExists = errors.New("already exists")
// ErrNilParam is returned when the given parameter is expected to be non-nil.
ErrNilParam = errors.New("param must not be nil")
)

14
vendor/github.com/asdine/storm/index/indexes.go generated vendored Normal file
View File

@ -0,0 +1,14 @@
// Package index contains Index engines used to store values and their corresponding IDs
package index
// Index interface
type Index interface {
Add(value []byte, targetID []byte) error
Remove(value []byte) error
RemoveID(id []byte) error
Get(value []byte) []byte
All(value []byte, opts *Options) ([][]byte, error)
AllRecords(opts *Options) ([][]byte, error)
Range(min []byte, max []byte, opts *Options) ([][]byte, error)
Prefix(prefix []byte, opts *Options) ([][]byte, error)
}

283
vendor/github.com/asdine/storm/index/list.go generated vendored Normal file
View File

@ -0,0 +1,283 @@
package index
import (
"bytes"
"github.com/asdine/storm/internal"
bolt "go.etcd.io/bbolt"
)
// NewListIndex loads a ListIndex
func NewListIndex(parent *bolt.Bucket, indexName []byte) (*ListIndex, error) {
var err error
b := parent.Bucket(indexName)
if b == nil {
if !parent.Writable() {
return nil, ErrNotFound
}
b, err = parent.CreateBucket(indexName)
if err != nil {
return nil, err
}
}
ids, err := NewUniqueIndex(b, []byte("storm__ids"))
if err != nil {
return nil, err
}
return &ListIndex{
IndexBucket: b,
Parent: parent,
IDs: ids,
}, nil
}
// ListIndex is an index that references values and the corresponding IDs.
type ListIndex struct {
Parent *bolt.Bucket
IndexBucket *bolt.Bucket
IDs *UniqueIndex
}
// Add a value to the list index
func (idx *ListIndex) Add(newValue []byte, targetID []byte) error {
if len(newValue) == 0 {
return ErrNilParam
}
if len(targetID) == 0 {
return ErrNilParam
}
key := idx.IDs.Get(targetID)
if key != nil {
err := idx.IndexBucket.Delete(key)
if err != nil {
return err
}
err = idx.IDs.Remove(targetID)
if err != nil {
return err
}
key = key[:0]
}
key = append(key, newValue...)
key = append(key, '_')
key = append(key, '_')
key = append(key, targetID...)
err := idx.IDs.Add(targetID, key)
if err != nil {
return err
}
return idx.IndexBucket.Put(key, targetID)
}
// Remove a value from the list index
func (idx *ListIndex) Remove(value []byte) error {
var err error
var keys [][]byte
c := idx.IndexBucket.Cursor()
prefix := generatePrefix(value)
for k, _ := c.Seek(prefix); bytes.HasPrefix(k, prefix); k, _ = c.Next() {
keys = append(keys, k)
}
for _, k := range keys {
err = idx.IndexBucket.Delete(k)
if err != nil {
return err
}
}
return idx.IDs.RemoveID(value)
}
// RemoveID removes an ID from the list index
func (idx *ListIndex) RemoveID(targetID []byte) error {
value := idx.IDs.Get(targetID)
if value == nil {
return nil
}
err := idx.IndexBucket.Delete(value)
if err != nil {
return err
}
return idx.IDs.Remove(targetID)
}
// Get the first ID corresponding to the given value
func (idx *ListIndex) Get(value []byte) []byte {
c := idx.IndexBucket.Cursor()
prefix := generatePrefix(value)
for k, id := c.Seek(prefix); bytes.HasPrefix(k, prefix); k, id = c.Next() {
return id
}
return nil
}
// All the IDs corresponding to the given value
func (idx *ListIndex) All(value []byte, opts *Options) ([][]byte, error) {
var list [][]byte
c := idx.IndexBucket.Cursor()
cur := internal.Cursor{C: c, Reverse: opts != nil && opts.Reverse}
prefix := generatePrefix(value)
k, id := c.Seek(prefix)
if cur.Reverse {
var count int
kc := k
idc := id
for ; kc != nil && bytes.HasPrefix(kc, prefix); kc, idc = c.Next() {
count++
k, id = kc, idc
}
if kc != nil {
k, id = c.Prev()
}
list = make([][]byte, 0, count)
}
for ; bytes.HasPrefix(k, prefix); k, id = cur.Next() {
if opts != nil && opts.Skip > 0 {
opts.Skip--
continue
}
if opts != nil && opts.Limit == 0 {
break
}
if opts != nil && opts.Limit > 0 {
opts.Limit--
}
list = append(list, id)
}
return list, nil
}
// AllRecords returns all the IDs of this index
func (idx *ListIndex) AllRecords(opts *Options) ([][]byte, error) {
var list [][]byte
c := internal.Cursor{C: idx.IndexBucket.Cursor(), Reverse: opts != nil && opts.Reverse}
for k, id := c.First(); k != nil; k, id = c.Next() {
if id == nil || bytes.Equal(k, []byte("storm__ids")) {
continue
}
if opts != nil && opts.Skip > 0 {
opts.Skip--
continue
}
if opts != nil && opts.Limit == 0 {
break
}
if opts != nil && opts.Limit > 0 {
opts.Limit--
}
list = append(list, id)
}
return list, nil
}
// Range returns the ids corresponding to the given range of values
func (idx *ListIndex) Range(min []byte, max []byte, opts *Options) ([][]byte, error) {
var list [][]byte
c := internal.RangeCursor{
C: idx.IndexBucket.Cursor(),
Reverse: opts != nil && opts.Reverse,
Min: min,
Max: max,
CompareFn: func(val, limit []byte) int {
pos := bytes.LastIndex(val, []byte("__"))
return bytes.Compare(val[:pos], limit)
},
}
for k, id := c.First(); c.Continue(k); k, id = c.Next() {
if id == nil || bytes.Equal(k, []byte("storm__ids")) {
continue
}
if opts != nil && opts.Skip > 0 {
opts.Skip--
continue
}
if opts != nil && opts.Limit == 0 {
break
}
if opts != nil && opts.Limit > 0 {
opts.Limit--
}
list = append(list, id)
}
return list, nil
}
// Prefix returns the ids whose values have the given prefix.
func (idx *ListIndex) Prefix(prefix []byte, opts *Options) ([][]byte, error) {
var list [][]byte
c := internal.PrefixCursor{
C: idx.IndexBucket.Cursor(),
Reverse: opts != nil && opts.Reverse,
Prefix: prefix,
}
for k, id := c.First(); k != nil && c.Continue(k); k, id = c.Next() {
if id == nil || bytes.Equal(k, []byte("storm__ids")) {
continue
}
if opts != nil && opts.Skip > 0 {
opts.Skip--
continue
}
if opts != nil && opts.Limit == 0 {
break
}
if opts != nil && opts.Limit > 0 {
opts.Limit--
}
list = append(list, id)
}
return list, nil
}
func generatePrefix(value []byte) []byte {
prefix := make([]byte, len(value)+2)
var i int
for i = range value {
prefix[i] = value[i]
}
prefix[i+1] = '_'
prefix[i+2] = '_'
return prefix
}
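
For orientation, each ListIndex entry above is stored under the key value + "__" + id, which is why Range's CompareFn trims the key at the last "__" before comparing it to the bounds. A standalone sketch of that layout (illustration only, not library code):

package main

import (
	"bytes"
	"fmt"
)

func main() {
	value := []byte("staff")
	id := []byte("42")

	// Same composition as ListIndex.Add: value, two underscores, then the ID.
	key := make([]byte, 0, len(value)+2+len(id))
	key = append(key, value...)
	key = append(key, '_', '_')
	key = append(key, id...)
	fmt.Printf("stored key: %q\n", key) // "staff__42"

	// Same trimming as the Range CompareFn: drop the "__<id>" suffix so only
	// the indexed value is compared against min/max.
	pos := bytes.LastIndex(key, []byte("__"))
	fmt.Printf("compared part: %q\n", key[:pos]) // "staff"
}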

15
vendor/github.com/asdine/storm/index/options.go generated vendored Normal file
View File

@ -0,0 +1,15 @@
package index
// NewOptions creates initialized Options
func NewOptions() *Options {
return &Options{
Limit: -1,
}
}
// Options are used to customize queries
type Options struct {
Limit int
Skip int
Reverse bool
}

183
vendor/github.com/asdine/storm/index/unique.go generated vendored Normal file
View File

@ -0,0 +1,183 @@
package index
import (
"bytes"
"github.com/asdine/storm/internal"
bolt "go.etcd.io/bbolt"
)
// NewUniqueIndex loads a UniqueIndex
func NewUniqueIndex(parent *bolt.Bucket, indexName []byte) (*UniqueIndex, error) {
var err error
b := parent.Bucket(indexName)
if b == nil {
if !parent.Writable() {
return nil, ErrNotFound
}
b, err = parent.CreateBucket(indexName)
if err != nil {
return nil, err
}
}
return &UniqueIndex{
IndexBucket: b,
Parent: parent,
}, nil
}
// UniqueIndex is an index that references unique values and the corresponding ID.
type UniqueIndex struct {
Parent *bolt.Bucket
IndexBucket *bolt.Bucket
}
// Add a value to the unique index
func (idx *UniqueIndex) Add(value []byte, targetID []byte) error {
if len(value) == 0 {
return ErrNilParam
}
if len(targetID) == 0 {
return ErrNilParam
}
exists := idx.IndexBucket.Get(value)
if exists != nil {
if bytes.Equal(exists, targetID) {
return nil
}
return ErrAlreadyExists
}
return idx.IndexBucket.Put(value, targetID)
}
// Remove a value from the unique index
func (idx *UniqueIndex) Remove(value []byte) error {
return idx.IndexBucket.Delete(value)
}
// RemoveID removes an ID from the unique index
func (idx *UniqueIndex) RemoveID(id []byte) error {
c := idx.IndexBucket.Cursor()
for val, ident := c.First(); val != nil; val, ident = c.Next() {
if bytes.Equal(ident, id) {
return idx.Remove(val)
}
}
return nil
}
// Get the id corresponding to the given value
func (idx *UniqueIndex) Get(value []byte) []byte {
return idx.IndexBucket.Get(value)
}
// All returns all the ids corresponding to the given value
func (idx *UniqueIndex) All(value []byte, opts *Options) ([][]byte, error) {
id := idx.IndexBucket.Get(value)
if id != nil {
return [][]byte{id}, nil
}
return nil, nil
}
// AllRecords returns all the IDs of this index
func (idx *UniqueIndex) AllRecords(opts *Options) ([][]byte, error) {
var list [][]byte
c := internal.Cursor{C: idx.IndexBucket.Cursor(), Reverse: opts != nil && opts.Reverse}
for val, ident := c.First(); val != nil; val, ident = c.Next() {
if opts != nil && opts.Skip > 0 {
opts.Skip--
continue
}
if opts != nil && opts.Limit == 0 {
break
}
if opts != nil && opts.Limit > 0 {
opts.Limit--
}
list = append(list, ident)
}
return list, nil
}
// Range returns the ids corresponding to the given range of values
func (idx *UniqueIndex) Range(min []byte, max []byte, opts *Options) ([][]byte, error) {
var list [][]byte
c := internal.RangeCursor{
C: idx.IndexBucket.Cursor(),
Reverse: opts != nil && opts.Reverse,
Min: min,
Max: max,
CompareFn: func(val, limit []byte) int {
return bytes.Compare(val, limit)
},
}
for val, ident := c.First(); val != nil && c.Continue(val); val, ident = c.Next() {
if opts != nil && opts.Skip > 0 {
opts.Skip--
continue
}
if opts != nil && opts.Limit == 0 {
break
}
if opts != nil && opts.Limit > 0 {
opts.Limit--
}
list = append(list, ident)
}
return list, nil
}
// Prefix returns the ids whose values have the given prefix.
func (idx *UniqueIndex) Prefix(prefix []byte, opts *Options) ([][]byte, error) {
var list [][]byte
c := internal.PrefixCursor{
C: idx.IndexBucket.Cursor(),
Reverse: opts != nil && opts.Reverse,
Prefix: prefix,
}
for val, ident := c.First(); val != nil && c.Continue(val); val, ident = c.Next() {
if opts != nil && opts.Skip > 0 {
opts.Skip--
continue
}
if opts != nil && opts.Limit == 0 {
break
}
if opts != nil && opts.Limit > 0 {
opts.Limit--
}
list = append(list, ident)
}
return list, nil
}
// first returns the first ID of this index
func (idx *UniqueIndex) first() []byte {
c := idx.IndexBucket.Cursor()
for val, ident := c.First(); val != nil; val, ident = c.Next() {
return ident
}
return nil
}
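
At the storm level, the UniqueIndex above is what turns a duplicate value in a unique field into ErrAlreadyExists. A short sketch, with a hypothetical User type, database path and the storm.Open call assumed:

package main

import (
	"fmt"
	"log"

	"github.com/asdine/storm"
)

type User struct {
	ID    int    `storm:"id,increment"`
	Email string `storm:"unique"`
}

func main() {
	db, err := storm.Open("unique.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := db.Save(&User{Email: "a@example.com"}); err != nil {
		log.Fatal(err)
	}
	// Second record with the same unique Email: the index refuses the Add.
	err = db.Save(&User{Email: "a@example.com"})
	fmt.Println(err == storm.ErrAlreadyExists) // true
}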

121
vendor/github.com/asdine/storm/internal/boltdb.go generated vendored Normal file
View File

@ -0,0 +1,121 @@
package internal
import (
"bytes"
bolt "go.etcd.io/bbolt"
)
// Cursor that can be reversed
type Cursor struct {
C *bolt.Cursor
Reverse bool
}
// First element
func (c *Cursor) First() ([]byte, []byte) {
if c.Reverse {
return c.C.Last()
}
return c.C.First()
}
// Next element
func (c *Cursor) Next() ([]byte, []byte) {
if c.Reverse {
return c.C.Prev()
}
return c.C.Next()
}
// RangeCursor that can be reversed
type RangeCursor struct {
C *bolt.Cursor
Reverse bool
Min []byte
Max []byte
CompareFn func([]byte, []byte) int
}
// First element
func (c *RangeCursor) First() ([]byte, []byte) {
if c.Reverse {
k, v := c.C.Seek(c.Max)
// If Seek doesn't find a key it goes to the next.
// If so, we need to get the previous one to avoid
// including bigger values. #218
if !bytes.HasPrefix(k, c.Max) && k != nil {
k, v = c.C.Prev()
}
return k, v
}
return c.C.Seek(c.Min)
}
// Next element
func (c *RangeCursor) Next() ([]byte, []byte) {
if c.Reverse {
return c.C.Prev()
}
return c.C.Next()
}
// Continue tells if the loop needs to continue
func (c *RangeCursor) Continue(val []byte) bool {
if c.Reverse {
return val != nil && c.CompareFn(val, c.Min) >= 0
}
return val != nil && c.CompareFn(val, c.Max) <= 0
}
// PrefixCursor that can be reversed
type PrefixCursor struct {
C *bolt.Cursor
Reverse bool
Prefix []byte
}
// First element
func (c *PrefixCursor) First() ([]byte, []byte) {
var k, v []byte
for k, v = c.C.First(); k != nil && !bytes.HasPrefix(k, c.Prefix); k, v = c.C.Next() {
}
if k == nil {
return nil, nil
}
if c.Reverse {
kc, vc := k, v
for ; kc != nil && bytes.HasPrefix(kc, c.Prefix); kc, vc = c.C.Next() {
k, v = kc, vc
}
if kc != nil {
k, v = c.C.Prev()
}
}
return k, v
}
// Next element
func (c *PrefixCursor) Next() ([]byte, []byte) {
if c.Reverse {
return c.C.Prev()
}
return c.C.Next()
}
// Continue tells if the loop needs to continue
func (c *PrefixCursor) Continue(val []byte) bool {
return val != nil && bytes.HasPrefix(val, c.Prefix)
}
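
The internal package cannot be imported from outside storm, so the cursors above are best read as thin wrappers over bbolt's cursor. The sketch below reproduces what Cursor{Reverse: true} does, Last()/Prev() instead of First()/Next(), directly against bbolt; the file and bucket names are made up.

package main

import (
	"fmt"
	"log"

	bolt "go.etcd.io/bbolt"
)

func main() {
	db, err := bolt.Open("demo.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	err = db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("letters"))
		if err != nil {
			return err
		}
		for _, k := range []string{"a", "b", "c"} {
			if err := b.Put([]byte(k), []byte(k)); err != nil {
				return err
			}
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}

	err = db.View(func(tx *bolt.Tx) error {
		c := tx.Bucket([]byte("letters")).Cursor()
		// Reverse iteration, mirroring internal.Cursor with Reverse == true.
		for k, _ := c.Last(); k != nil; k, _ = c.Prev() {
			fmt.Println(string(k)) // c, b, a
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}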

170
vendor/github.com/asdine/storm/kv.go generated vendored Normal file
View File

@ -0,0 +1,170 @@
package storm
import (
"reflect"
bolt "go.etcd.io/bbolt"
)
// KeyValueStore can store and fetch values by key
type KeyValueStore interface {
// Get a value from a bucket
Get(bucketName string, key interface{}, to interface{}) error
// Set a key/value pair into a bucket
Set(bucketName string, key interface{}, value interface{}) error
// Delete deletes a key from a bucket
Delete(bucketName string, key interface{}) error
// GetBytes gets a raw value from a bucket.
GetBytes(bucketName string, key interface{}) ([]byte, error)
// SetBytes sets a raw value into a bucket.
SetBytes(bucketName string, key interface{}, value []byte) error
// KeyExists reports the presence of a key in a bucket.
KeyExists(bucketName string, key interface{}) (bool, error)
}
// GetBytes gets a raw value from a bucket.
func (n *node) GetBytes(bucketName string, key interface{}) ([]byte, error) {
id, err := toBytes(key, n.codec)
if err != nil {
return nil, err
}
var val []byte
return val, n.readTx(func(tx *bolt.Tx) error {
raw, err := n.getBytes(tx, bucketName, id)
if err != nil {
return err
}
val = make([]byte, len(raw))
copy(val, raw)
return nil
})
}
// getBytes gets a raw value from a bucket within the given transaction.
func (n *node) getBytes(tx *bolt.Tx, bucketName string, id []byte) ([]byte, error) {
bucket := n.GetBucket(tx, bucketName)
if bucket == nil {
return nil, ErrNotFound
}
raw := bucket.Get(id)
if raw == nil {
return nil, ErrNotFound
}
return raw, nil
}
// SetBytes sets a raw value into a bucket.
func (n *node) SetBytes(bucketName string, key interface{}, value []byte) error {
if key == nil {
return ErrNilParam
}
id, err := toBytes(key, n.codec)
if err != nil {
return err
}
return n.readWriteTx(func(tx *bolt.Tx) error {
return n.setBytes(tx, bucketName, id, value)
})
}
func (n *node) setBytes(tx *bolt.Tx, bucketName string, id, data []byte) error {
bucket, err := n.CreateBucketIfNotExists(tx, bucketName)
if err != nil {
return err
}
// save node configuration in the bucket
_, err = newMeta(bucket, n)
if err != nil {
return err
}
return bucket.Put(id, data)
}
// Get a value from a bucket
func (n *node) Get(bucketName string, key interface{}, to interface{}) error {
ref := reflect.ValueOf(to)
if !ref.IsValid() || ref.Kind() != reflect.Ptr {
return ErrPtrNeeded
}
id, err := toBytes(key, n.codec)
if err != nil {
return err
}
return n.readTx(func(tx *bolt.Tx) error {
raw, err := n.getBytes(tx, bucketName, id)
if err != nil {
return err
}
return n.codec.Unmarshal(raw, to)
})
}
// Set a key/value pair into a bucket
func (n *node) Set(bucketName string, key interface{}, value interface{}) error {
var data []byte
var err error
if value != nil {
data, err = n.codec.Marshal(value)
if err != nil {
return err
}
}
return n.SetBytes(bucketName, key, data)
}
// Delete deletes a key from a bucket
func (n *node) Delete(bucketName string, key interface{}) error {
id, err := toBytes(key, n.codec)
if err != nil {
return err
}
return n.readWriteTx(func(tx *bolt.Tx) error {
return n.delete(tx, bucketName, id)
})
}
func (n *node) delete(tx *bolt.Tx, bucketName string, id []byte) error {
bucket := n.GetBucket(tx, bucketName)
if bucket == nil {
return ErrNotFound
}
return bucket.Delete(id)
}
// KeyExists reports the presence of a key in a bucket.
func (n *node) KeyExists(bucketName string, key interface{}) (bool, error) {
id, err := toBytes(key, n.codec)
if err != nil {
return false, err
}
var exists bool
return exists, n.readTx(func(tx *bolt.Tx) error {
bucket := n.GetBucket(tx, bucketName)
if bucket == nil {
return ErrNotFound
}
v := bucket.Get(id)
if v != nil {
exists = true
}
return nil
})
}
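
A minimal sketch of the KeyValueStore methods above, assuming the storm.Open constructor and the default JSON codec; bucket, key and file names are illustrative:

package main

import (
	"fmt"
	"log"

	"github.com/asdine/storm"
)

func main() {
	db, err := storm.Open("kv.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Set marshals the value with the node's codec and stores it under the key.
	if err := db.Set("sessions", "token-123", map[string]string{"user": "alice"}); err != nil {
		log.Fatal(err)
	}

	var session map[string]string
	if err := db.Get("sessions", "token-123", &session); err != nil {
		log.Fatal(err)
	}

	exists, err := db.KeyExists("sessions", "token-123")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(session["user"], exists) // alice true

	if err := db.Delete("sessions", "token-123"); err != nil {
		log.Fatal(err)
	}
}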

69
vendor/github.com/asdine/storm/metadata.go generated vendored Normal file
View File

@ -0,0 +1,69 @@
package storm
import (
"reflect"
bolt "go.etcd.io/bbolt"
)
const (
metaCodec = "codec"
)
func newMeta(b *bolt.Bucket, n Node) (*meta, error) {
m := b.Bucket([]byte(metadataBucket))
if m != nil {
name := m.Get([]byte(metaCodec))
if string(name) != n.Codec().Name() {
return nil, ErrDifferentCodec
}
return &meta{
node: n,
bucket: m,
}, nil
}
m, err := b.CreateBucket([]byte(metadataBucket))
if err != nil {
return nil, err
}
if err := m.Put([]byte(metaCodec), []byte(n.Codec().Name())); err != nil {
return nil, err
}
return &meta{
node: n,
bucket: m,
}, nil
}
type meta struct {
node Node
bucket *bolt.Bucket
}
func (m *meta) increment(field *fieldConfig) error {
var err error
counter := field.IncrementStart
raw := m.bucket.Get([]byte(field.Name + "counter"))
if raw != nil {
counter, err = numberfromb(raw)
if err != nil {
return err
}
counter++
}
raw, err = numbertob(counter)
if err != nil {
return err
}
err = m.bucket.Put([]byte(field.Name+"counter"), raw)
if err != nil {
return err
}
field.Value.Set(reflect.ValueOf(counter).Convert(field.Value.Type()))
field.IsZero = false
return nil
}

126
vendor/github.com/asdine/storm/node.go generated vendored Normal file
View File

@ -0,0 +1,126 @@
package storm
import (
"github.com/asdine/storm/codec"
bolt "go.etcd.io/bbolt"
)
// A Node in Storm represents the API to a BoltDB bucket.
type Node interface {
Tx
TypeStore
KeyValueStore
BucketScanner
// From returns a new Storm node with a new bucket root below the current.
// All DB operations on the new node will be executed relative to this bucket.
From(addend ...string) Node
// Bucket returns the bucket name as a slice from the root.
// In the normal, simple case this will be empty.
Bucket() []string
// GetBucket returns the given bucket below the current node.
GetBucket(tx *bolt.Tx, children ...string) *bolt.Bucket
// CreateBucketIfNotExists creates the bucket below the current node if it doesn't
// already exist.
CreateBucketIfNotExists(tx *bolt.Tx, bucket string) (*bolt.Bucket, error)
// WithTransaction returns a new Storm node that will use the given transaction.
WithTransaction(tx *bolt.Tx) Node
// Begin starts a new transaction.
Begin(writable bool) (Node, error)
// Codec used by this instance of Storm
Codec() codec.MarshalUnmarshaler
// WithCodec returns a new Storm Node that will use the given Codec.
WithCodec(codec codec.MarshalUnmarshaler) Node
// WithBatch returns a new Storm Node with the batch mode enabled.
WithBatch(enabled bool) Node
}
// A Node in Storm represents the API to a BoltDB bucket.
type node struct {
s *DB
// The root bucket. In the normal, simple case this will be empty.
rootBucket []string
// Transaction object. Nil if not in transaction
tx *bolt.Tx
// Codec of this node
codec codec.MarshalUnmarshaler
// Enable batch mode for read-write transaction, instead of update mode
batchMode bool
}
// From returns a new Storm Node with a new bucket root below the current.
// All DB operations on the new node will be executed relative to this bucket.
func (n node) From(addend ...string) Node {
n.rootBucket = append(n.rootBucket, addend...)
return &n
}
// WithTransaction returns a new Storm Node that will use the given transaction.
func (n node) WithTransaction(tx *bolt.Tx) Node {
n.tx = tx
return &n
}
// WithCodec returns a new Storm Node that will use the given Codec.
func (n node) WithCodec(codec codec.MarshalUnmarshaler) Node {
n.codec = codec
return &n
}
// WithBatch returns a new Storm Node with the batch mode enabled.
func (n node) WithBatch(enabled bool) Node {
n.batchMode = enabled
return &n
}
// Bucket returns the bucket name as a slice from the root.
// In the normal, simple case this will be empty.
func (n *node) Bucket() []string {
return n.rootBucket
}
// Codec returns the EncodeDecoder used by this instance of Storm
func (n *node) Codec() codec.MarshalUnmarshaler {
return n.codec
}
// Detects if already in transaction or runs a read write transaction.
// Uses batch mode if enabled.
func (n *node) readWriteTx(fn func(tx *bolt.Tx) error) error {
if n.tx != nil {
return fn(n.tx)
}
if n.batchMode {
return n.s.Bolt.Batch(func(tx *bolt.Tx) error {
return fn(tx)
})
}
return n.s.Bolt.Update(func(tx *bolt.Tx) error {
return fn(tx)
})
}
// Detects if already in transaction or runs a read transaction.
func (n *node) readTx(fn func(tx *bolt.Tx) error) error {
if n.tx != nil {
return fn(n.tx)
}
return n.s.Bolt.View(func(tx *bolt.Tx) error {
return fn(tx)
})
}
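
A short sketch of node scoping with From and of WithBatch, as declared in the Node interface above. The Item type, bucket names and storm.Open call are assumptions for the example.

package main

import (
	"log"

	"github.com/asdine/storm"
)

type Item struct {
	ID   int `storm:"id,increment"`
	Name string
}

func main() {
	db, err := storm.Open("nodes.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Every operation on this node is rooted under the "tenants"/"acme" buckets.
	acme := db.From("tenants", "acme")

	// Batch mode runs read-write transactions through bolt's Batch instead of
	// Update, which helps throughput when many goroutines write concurrently.
	batched := acme.WithBatch(true)
	if err := batched.Save(&Item{Name: "first"}); err != nil {
		log.Fatal(err)
	}

	var items []Item
	if err := acme.All(&items); err != nil {
		log.Fatal(err)
	}
	log.Printf("items under tenants/acme: %d", len(items))
}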

97
vendor/github.com/asdine/storm/options.go generated vendored Normal file
View File

@ -0,0 +1,97 @@
package storm
import (
"os"
"github.com/asdine/storm/codec"
"github.com/asdine/storm/index"
bolt "go.etcd.io/bbolt"
)
// BoltOptions used to pass options to BoltDB.
func BoltOptions(mode os.FileMode, options *bolt.Options) func(*Options) error {
return func(opts *Options) error {
opts.boltMode = mode
opts.boltOptions = options
return nil
}
}
// Codec used to set a custom encoder and decoder. The default is JSON.
func Codec(c codec.MarshalUnmarshaler) func(*Options) error {
return func(opts *Options) error {
opts.codec = c
return nil
}
}
// Batch enables the use of batch instead of update for read-write transactions.
func Batch() func(*Options) error {
return func(opts *Options) error {
opts.batchMode = true
return nil
}
}
// Root used to set the root bucket. See also the From method.
func Root(root ...string) func(*Options) error {
return func(opts *Options) error {
opts.rootBucket = root
return nil
}
}
// UseDB allows Storm to use an existing open Bolt.DB.
// Warning: storm.DB.Close() will close the bolt.DB instance.
func UseDB(b *bolt.DB) func(*Options) error {
return func(opts *Options) error {
opts.path = b.Path()
opts.bolt = b
return nil
}
}
// Limit sets the maximum number of records to return
func Limit(limit int) func(*index.Options) {
return func(opts *index.Options) {
opts.Limit = limit
}
}
// Skip sets the number of records to skip
func Skip(offset int) func(*index.Options) {
return func(opts *index.Options) {
opts.Skip = offset
}
}
// Reverse will return the results in descending order
func Reverse() func(*index.Options) {
return func(opts *index.Options) {
opts.Reverse = true
}
}
// Options are used to customize the way Storm opens a database.
type Options struct {
// Handles encoding and decoding of objects
codec codec.MarshalUnmarshaler
// Bolt file mode
boltMode os.FileMode
// Bolt options
boltOptions *bolt.Options
// Enable batch mode for read-write transaction, instead of update mode
batchMode bool
// The root bucket name
rootBucket []string
// Path of the database file
path string
// Bolt is still easily accessible
bolt *bolt.DB
}
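
The *Options functions above configure storm.Open (defined outside this hunk), while Limit, Skip and Reverse configure individual queries. A sketch of how the open-time options compose; the file path is arbitrary and the gob codec import is the one the library ships with:

package main

import (
	"log"
	"time"

	"github.com/asdine/storm"
	"github.com/asdine/storm/codec/gob"
	bolt "go.etcd.io/bbolt"
)

func main() {
	db, err := storm.Open("app.db",
		// Passed straight through to bolt.Open.
		storm.BoltOptions(0600, &bolt.Options{Timeout: time.Second}),
		// Use gob instead of the default JSON codec.
		storm.Codec(gob.Codec),
		// Read-write transactions use Batch instead of Update.
		storm.Batch(),
		// Root bucket for every operation, comparable to db.From("app", "v1").
		storm.Root("app", "v1"),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}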

107
vendor/github.com/asdine/storm/q/compare.go generated vendored Normal file
View File

@ -0,0 +1,107 @@
package q
import (
"go/constant"
"go/token"
"reflect"
"strconv"
)
func compare(a, b interface{}, tok token.Token) bool {
vala := reflect.ValueOf(a)
valb := reflect.ValueOf(b)
ak := vala.Kind()
bk := valb.Kind()
switch {
// comparing nil values
case (ak == reflect.Ptr || ak == reflect.Slice || ak == reflect.Interface || ak == reflect.Invalid) &&
(bk == reflect.Ptr || bk == reflect.Slice || bk == reflect.Interface || bk == reflect.Invalid) &&
(!vala.IsValid() || vala.IsNil()) && (!valb.IsValid() || valb.IsNil()):
return true
case ak >= reflect.Int && ak <= reflect.Int64:
if bk >= reflect.Int && bk <= reflect.Int64 {
return constant.Compare(constant.MakeInt64(vala.Int()), tok, constant.MakeInt64(valb.Int()))
}
if bk >= reflect.Uint && bk <= reflect.Uint64 {
return constant.Compare(constant.MakeInt64(vala.Int()), tok, constant.MakeInt64(int64(valb.Uint())))
}
if bk == reflect.Float32 || bk == reflect.Float64 {
return constant.Compare(constant.MakeFloat64(float64(vala.Int())), tok, constant.MakeFloat64(valb.Float()))
}
if bk == reflect.String {
bla, err := strconv.ParseFloat(valb.String(), 64)
if err != nil {
return false
}
return constant.Compare(constant.MakeFloat64(float64(vala.Int())), tok, constant.MakeFloat64(bla))
}
case ak >= reflect.Uint && ak <= reflect.Uint64:
if bk >= reflect.Uint && bk <= reflect.Uint64 {
return constant.Compare(constant.MakeUint64(vala.Uint()), tok, constant.MakeUint64(valb.Uint()))
}
if bk >= reflect.Int && bk <= reflect.Int64 {
return constant.Compare(constant.MakeUint64(vala.Uint()), tok, constant.MakeUint64(uint64(valb.Int())))
}
if bk == reflect.Float32 || bk == reflect.Float64 {
return constant.Compare(constant.MakeFloat64(float64(vala.Uint())), tok, constant.MakeFloat64(valb.Float()))
}
if bk == reflect.String {
bla, err := strconv.ParseFloat(valb.String(), 64)
if err != nil {
return false
}
return constant.Compare(constant.MakeFloat64(float64(vala.Uint())), tok, constant.MakeFloat64(bla))
}
case ak == reflect.Float32 || ak == reflect.Float64:
if bk == reflect.Float32 || bk == reflect.Float64 {
return constant.Compare(constant.MakeFloat64(vala.Float()), tok, constant.MakeFloat64(valb.Float()))
}
if bk >= reflect.Int && bk <= reflect.Int64 {
return constant.Compare(constant.MakeFloat64(vala.Float()), tok, constant.MakeFloat64(float64(valb.Int())))
}
if bk >= reflect.Uint && bk <= reflect.Uint64 {
return constant.Compare(constant.MakeFloat64(vala.Float()), tok, constant.MakeFloat64(float64(valb.Uint())))
}
if bk == reflect.String {
bla, err := strconv.ParseFloat(valb.String(), 64)
if err != nil {
return false
}
return constant.Compare(constant.MakeFloat64(vala.Float()), tok, constant.MakeFloat64(bla))
}
case ak == reflect.String:
if bk == reflect.String {
return constant.Compare(constant.MakeString(vala.String()), tok, constant.MakeString(valb.String()))
}
}
if reflect.TypeOf(a).String() == "time.Time" && reflect.TypeOf(b).String() == "time.Time" {
var x, y int64
x = 1
if vala.MethodByName("Equal").Call([]reflect.Value{valb})[0].Bool() {
y = 1
} else if vala.MethodByName("Before").Call([]reflect.Value{valb})[0].Bool() {
y = 2
}
return constant.Compare(constant.MakeInt64(x), tok, constant.MakeInt64(y))
}
if tok == token.EQL {
return reflect.DeepEqual(a, b)
}
return false
}

67
vendor/github.com/asdine/storm/q/fieldmatcher.go generated vendored Normal file
View File

@ -0,0 +1,67 @@
package q
import (
"errors"
"go/token"
"reflect"
)
// ErrUnknownField is returned when an unknown field is passed.
var ErrUnknownField = errors.New("unknown field")
type fieldMatcherDelegate struct {
FieldMatcher
Field string
}
// NewFieldMatcher creates a Matcher for a given field.
func NewFieldMatcher(field string, fm FieldMatcher) Matcher {
return fieldMatcherDelegate{Field: field, FieldMatcher: fm}
}
// FieldMatcher can be used in NewFieldMatcher as a simple way to create the
// most common Matcher: A Matcher that evaluates one field's value.
// For more complex scenarios, implement the Matcher interface directly.
type FieldMatcher interface {
MatchField(v interface{}) (bool, error)
}
func (r fieldMatcherDelegate) Match(i interface{}) (bool, error) {
v := reflect.Indirect(reflect.ValueOf(i))
return r.MatchValue(&v)
}
func (r fieldMatcherDelegate) MatchValue(v *reflect.Value) (bool, error) {
field := v.FieldByName(r.Field)
if !field.IsValid() {
return false, ErrUnknownField
}
return r.MatchField(field.Interface())
}
// NewField2FieldMatcher creates a Matcher for a given field1 and field2.
func NewField2FieldMatcher(field1, field2 string, tok token.Token) Matcher {
return field2fieldMatcherDelegate{Field1: field1, Field2: field2, Tok: tok}
}
type field2fieldMatcherDelegate struct {
Field1, Field2 string
Tok token.Token
}
func (r field2fieldMatcherDelegate) Match(i interface{}) (bool, error) {
v := reflect.Indirect(reflect.ValueOf(i))
return r.MatchValue(&v)
}
func (r field2fieldMatcherDelegate) MatchValue(v *reflect.Value) (bool, error) {
field1 := v.FieldByName(r.Field1)
if !field1.IsValid() {
return false, ErrUnknownField
}
field2 := v.FieldByName(r.Field2)
if !field2.IsValid() {
return false, ErrUnknownField
}
return compare(field1.Interface(), field2.Interface(), r.Tok), nil
}

51
vendor/github.com/asdine/storm/q/regexp.go generated vendored Normal file
View File

@ -0,0 +1,51 @@
package q
import (
"fmt"
"regexp"
"sync"
)
// Re creates a regexp matcher. It checks if the given field matches the given regexp.
// Note that this only supports fields of type string or []byte.
func Re(field string, re string) Matcher {
regexpCache.RLock()
if r, ok := regexpCache.m[re]; ok {
regexpCache.RUnlock()
return NewFieldMatcher(field, &regexpMatcher{r: r})
}
regexpCache.RUnlock()
regexpCache.Lock()
r, err := regexp.Compile(re)
if err == nil {
regexpCache.m[re] = r
}
regexpCache.Unlock()
return NewFieldMatcher(field, &regexpMatcher{r: r, err: err})
}
var regexpCache = struct {
sync.RWMutex
m map[string]*regexp.Regexp
}{m: make(map[string]*regexp.Regexp)}
type regexpMatcher struct {
r *regexp.Regexp
err error
}
func (r *regexpMatcher) MatchField(v interface{}) (bool, error) {
if r.err != nil {
return false, r.err
}
switch fieldValue := v.(type) {
case string:
return r.r.MatchString(fieldValue), nil
case []byte:
return r.r.Match(fieldValue), nil
default:
return false, fmt.Errorf("Only string and []byte supported for regexp matcher, got %T", fieldValue)
}
}

247
vendor/github.com/asdine/storm/q/tree.go generated vendored Normal file
View File

@ -0,0 +1,247 @@
// Package q contains a list of Matchers used to compare struct fields with values
package q
import (
"go/token"
"reflect"
)
// A Matcher is used to test against a record to see if it matches.
type Matcher interface {
// Match is used to test the criteria against a structure.
Match(interface{}) (bool, error)
}
// A ValueMatcher is used to test against a reflect.Value.
type ValueMatcher interface {
// MatchValue tests if the given reflect.Value matches.
// It is useful when the reflect.Value of an object already exists.
MatchValue(*reflect.Value) (bool, error)
}
type cmp struct {
value interface{}
token token.Token
}
func (c *cmp) MatchField(v interface{}) (bool, error) {
return compare(v, c.value, c.token), nil
}
type trueMatcher struct{}
func (*trueMatcher) Match(i interface{}) (bool, error) {
return true, nil
}
func (*trueMatcher) MatchValue(v *reflect.Value) (bool, error) {
return true, nil
}
type or struct {
children []Matcher
}
func (c *or) Match(i interface{}) (bool, error) {
v := reflect.Indirect(reflect.ValueOf(i))
return c.MatchValue(&v)
}
func (c *or) MatchValue(v *reflect.Value) (bool, error) {
for _, matcher := range c.children {
if vm, ok := matcher.(ValueMatcher); ok {
ok, err := vm.MatchValue(v)
if err != nil {
return false, err
}
if ok {
return true, nil
}
continue
}
ok, err := matcher.Match(v.Interface())
if err != nil {
return false, err
}
if ok {
return true, nil
}
}
return false, nil
}
type and struct {
children []Matcher
}
func (c *and) Match(i interface{}) (bool, error) {
v := reflect.Indirect(reflect.ValueOf(i))
return c.MatchValue(&v)
}
func (c *and) MatchValue(v *reflect.Value) (bool, error) {
for _, matcher := range c.children {
if vm, ok := matcher.(ValueMatcher); ok {
ok, err := vm.MatchValue(v)
if err != nil {
return false, err
}
if !ok {
return false, nil
}
continue
}
ok, err := matcher.Match(v.Interface())
if err != nil {
return false, err
}
if !ok {
return false, nil
}
}
return true, nil
}
type strictEq struct {
field string
value interface{}
}
func (s *strictEq) MatchField(v interface{}) (bool, error) {
return reflect.DeepEqual(v, s.value), nil
}
type in struct {
list interface{}
}
func (i *in) MatchField(v interface{}) (bool, error) {
ref := reflect.ValueOf(i.list)
if ref.Kind() != reflect.Slice {
return false, nil
}
c := cmp{
token: token.EQL,
}
for i := 0; i < ref.Len(); i++ {
c.value = ref.Index(i).Interface()
ok, err := c.MatchField(v)
if err != nil {
return false, err
}
if ok {
return true, nil
}
}
return false, nil
}
type not struct {
children []Matcher
}
func (n *not) Match(i interface{}) (bool, error) {
v := reflect.Indirect(reflect.ValueOf(i))
return n.MatchValue(&v)
}
func (n *not) MatchValue(v *reflect.Value) (bool, error) {
var err error
for _, matcher := range n.children {
vm, ok := matcher.(ValueMatcher)
if ok {
ok, err = vm.MatchValue(v)
} else {
ok, err = matcher.Match(v.Interface())
}
if err != nil {
return false, err
}
if ok {
return false, nil
}
}
return true, nil
}
// Eq matcher, checks if the given field is equal to the given value
func Eq(field string, v interface{}) Matcher {
return NewFieldMatcher(field, &cmp{value: v, token: token.EQL})
}
// EqF matcher, checks if the given field is equal to the given field
func EqF(field1, field2 string) Matcher {
return NewField2FieldMatcher(field1, field2, token.EQL)
}
// StrictEq matcher, checks if the given field is deeply equal to the given value
func StrictEq(field string, v interface{}) Matcher {
return NewFieldMatcher(field, &strictEq{value: v})
}
// Gt matcher, checks if the given field is greater than the given value
func Gt(field string, v interface{}) Matcher {
return NewFieldMatcher(field, &cmp{value: v, token: token.GTR})
}
// GtF matcher, checks if the given field is greater than the given field
func GtF(field1, field2 string) Matcher {
return NewField2FieldMatcher(field1, field2, token.GTR)
}
// Gte matcher, checks if the given field is greater than or equal to the given value
func Gte(field string, v interface{}) Matcher {
return NewFieldMatcher(field, &cmp{value: v, token: token.GEQ})
}
// GteF matcher, checks if the given field is greater than or equal to the given field
func GteF(field1, field2 string) Matcher {
return NewField2FieldMatcher(field1, field2, token.GEQ)
}
// Lt matcher, checks if the given field is lesser than the given value
func Lt(field string, v interface{}) Matcher {
return NewFieldMatcher(field, &cmp{value: v, token: token.LSS})
}
// LtF matcher, checks if the given field is lesser than the given field
func LtF(field1, field2 string) Matcher {
return NewField2FieldMatcher(field1, field2, token.LSS)
}
// Lte matcher, checks if the given field is lesser than or equal to the given value
func Lte(field string, v interface{}) Matcher {
return NewFieldMatcher(field, &cmp{value: v, token: token.LEQ})
}
// LteF matcher, checks if the given field is lesser than or equal to the given field
func LteF(field1, field2 string) Matcher {
return NewField2FieldMatcher(field1, field2, token.LEQ)
}
// In matcher, checks if the given field matches one of the value of the given slice.
// v must be a slice.
func In(field string, v interface{}) Matcher {
return NewFieldMatcher(field, &in{list: v})
}
// True matcher, always returns true
func True() Matcher { return &trueMatcher{} }
// Or matcher, checks if at least one of the given matchers matches the record
func Or(matchers ...Matcher) Matcher { return &or{children: matchers} }
// And matcher, checks if all of the given matchers matches the record
func And(matchers ...Matcher) Matcher { return &and{children: matchers} }
// Not matcher, checks if all of the given matchers return false
func Not(matchers ...Matcher) Matcher { return &not{children: matchers} }
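
The matchers above only inspect struct fields, so they can be composed and tested directly with Match before being handed to a query. A small sketch with a hypothetical Ticket type:

package main

import (
	"fmt"
	"log"

	"github.com/asdine/storm/q"
)

type Ticket struct {
	Status   string
	Priority int
	Assignee string
}

func main() {
	// Open tickets that are either high priority or unassigned.
	matcher := q.And(
		q.Eq("Status", "open"),
		q.Or(
			q.Gte("Priority", 3),
			q.Eq("Assignee", ""),
		),
	)

	ok, err := matcher.Match(&Ticket{Status: "open", Priority: 1})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ok) // true: open and unassigned
}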

219
vendor/github.com/asdine/storm/query.go generated vendored Normal file
View File

@ -0,0 +1,219 @@
package storm
import (
"github.com/asdine/storm/internal"
"github.com/asdine/storm/q"
bolt "go.etcd.io/bbolt"
)
// Select a list of records that match a list of matchers. Doesn't use indexes.
func (n *node) Select(matchers ...q.Matcher) Query {
tree := q.And(matchers...)
return newQuery(n, tree)
}
// Query is the low level query engine used by Storm. It allows searches to be run through an entire bucket.
type Query interface {
// Skip matching records by the given number
Skip(int) Query
// Limit the results by the given number
Limit(int) Query
// Order by the given fields, in descending precedence, left-to-right.
OrderBy(...string) Query
// Reverse the order of the results
Reverse() Query
// Bucket specifies the bucket name
Bucket(string) Query
// Find a list of matching records
Find(interface{}) error
// First gets the first matching record
First(interface{}) error
// Delete all matching records
Delete(interface{}) error
// Count all the matching records
Count(interface{}) (int, error)
// Raw returns all the records without decoding them
Raw() ([][]byte, error)
// RawEach executes the given function for each raw element
RawEach(func([]byte, []byte) error) error
// Each executes the given function for each element
Each(interface{}, func(interface{}) error) error
}
func newQuery(n *node, tree q.Matcher) *query {
return &query{
skip: 0,
limit: -1,
node: n,
tree: tree,
}
}
type query struct {
limit int
skip int
reverse bool
tree q.Matcher
node *node
bucket string
orderBy []string
}
func (q *query) Skip(nb int) Query {
q.skip = nb
return q
}
func (q *query) Limit(nb int) Query {
q.limit = nb
return q
}
func (q *query) OrderBy(field ...string) Query {
q.orderBy = field
return q
}
func (q *query) Reverse() Query {
q.reverse = true
return q
}
func (q *query) Bucket(bucketName string) Query {
q.bucket = bucketName
return q
}
func (q *query) Find(to interface{}) error {
sink, err := newListSink(q.node, to)
if err != nil {
return err
}
return q.runQuery(sink)
}
func (q *query) First(to interface{}) error {
sink, err := newFirstSink(q.node, to)
if err != nil {
return err
}
q.limit = 1
return q.runQuery(sink)
}
func (q *query) Delete(kind interface{}) error {
sink, err := newDeleteSink(q.node, kind)
if err != nil {
return err
}
return q.runQuery(sink)
}
func (q *query) Count(kind interface{}) (int, error) {
sink, err := newCountSink(q.node, kind)
if err != nil {
return 0, err
}
err = q.runQuery(sink)
if err != nil {
return 0, err
}
return sink.counter, nil
}
func (q *query) Raw() ([][]byte, error) {
sink := newRawSink()
err := q.runQuery(sink)
if err != nil {
return nil, err
}
return sink.results, nil
}
func (q *query) RawEach(fn func([]byte, []byte) error) error {
sink := newRawSink()
sink.execFn = fn
return q.runQuery(sink)
}
func (q *query) Each(kind interface{}, fn func(interface{}) error) error {
sink, err := newEachSink(kind)
if err != nil {
return err
}
sink.execFn = fn
return q.runQuery(sink)
}
func (q *query) runQuery(sink sink) error {
if q.node.tx != nil {
return q.query(q.node.tx, sink)
}
if sink.readOnly() {
return q.node.s.Bolt.View(func(tx *bolt.Tx) error {
return q.query(tx, sink)
})
}
return q.node.s.Bolt.Update(func(tx *bolt.Tx) error {
return q.query(tx, sink)
})
}
func (q *query) query(tx *bolt.Tx, sink sink) error {
bucketName := q.bucket
if bucketName == "" {
bucketName = sink.bucketName()
}
bucket := q.node.GetBucket(tx, bucketName)
if q.limit == 0 {
return sink.flush()
}
sorter := newSorter(q.node, sink)
sorter.orderBy = q.orderBy
sorter.reverse = q.reverse
sorter.skip = q.skip
sorter.limit = q.limit
if bucket != nil {
c := internal.Cursor{C: bucket.Cursor(), Reverse: q.reverse}
for k, v := c.First(); k != nil; k, v = c.Next() {
if v == nil {
continue
}
stop, err := sorter.filter(q.tree, bucket, k, v)
if err != nil {
return err
}
if stop {
break
}
}
}
return sorter.flush()
}
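
A sketch of driving the Query interface above through Select. The Article type, its contents and the storm.Open call are assumptions for illustration; the chained calls are the ones declared in the interface.

package main

import (
	"log"

	"github.com/asdine/storm"
	"github.com/asdine/storm/q"
)

type Article struct {
	ID    int    `storm:"id,increment"`
	Tag   string `storm:"index"`
	Views int
}

func main() {
	db, err := storm.Open("articles.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	for i := 1; i <= 10; i++ {
		if err := db.Save(&Article{Tag: "go", Views: i * 10}); err != nil {
			log.Fatal(err)
		}
	}

	// Match, order, paginate and decode into a slice in one pipeline.
	var popular []Article
	err = db.Select(q.Eq("Tag", "go"), q.Gte("Views", 50)).
		OrderBy("Views").
		Reverse().
		Skip(1).
		Limit(3).
		Find(&popular)
	if err != nil {
		log.Fatal(err)
	}

	// Count the same kind of selection without decoding every record.
	n, err := db.Select(q.Eq("Tag", "go")).Count(&Article{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("kept %d of %d tagged articles", len(popular), n)
}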

99
vendor/github.com/asdine/storm/scan.go generated vendored Normal file
View File

@ -0,0 +1,99 @@
package storm
import (
"bytes"
bolt "go.etcd.io/bbolt"
)
// A BucketScanner scans a Node for a list of buckets
type BucketScanner interface {
// PrefixScan scans the root buckets for keys matching the given prefix.
PrefixScan(prefix string) []Node
// RangeScan scans the buckets in this node over a range such as a sortable time range.
RangeScan(min, max string) []Node
}
// PrefixScan scans the buckets in this node for keys matching the given prefix.
func (n *node) PrefixScan(prefix string) []Node {
if n.tx != nil {
return n.prefixScan(n.tx, prefix)
}
var nodes []Node
n.readTx(func(tx *bolt.Tx) error {
nodes = n.prefixScan(tx, prefix)
return nil
})
return nodes
}
func (n *node) prefixScan(tx *bolt.Tx, prefix string) []Node {
var (
prefixBytes = []byte(prefix)
nodes []Node
c = n.cursor(tx)
)
for k, v := c.Seek(prefixBytes); k != nil && bytes.HasPrefix(k, prefixBytes); k, v = c.Next() {
if v != nil {
continue
}
nodes = append(nodes, n.From(string(k)))
}
return nodes
}
// RangeScan scans the buckets in this node over a range such as a sortable time range.
func (n *node) RangeScan(min, max string) []Node {
if n.tx != nil {
return n.rangeScan(n.tx, min, max)
}
var nodes []Node
n.readTx(func(tx *bolt.Tx) error {
nodes = n.rangeScan(tx, min, max)
return nil
})
return nodes
}
func (n *node) rangeScan(tx *bolt.Tx, min, max string) []Node {
var (
minBytes = []byte(min)
maxBytes = []byte(max)
nodes []Node
c = n.cursor(tx)
)
for k, v := c.Seek(minBytes); k != nil && bytes.Compare(k, maxBytes) <= 0; k, v = c.Next() {
if v != nil {
continue
}
nodes = append(nodes, n.From(string(k)))
}
return nodes
}
func (n *node) cursor(tx *bolt.Tx) *bolt.Cursor {
var c *bolt.Cursor
if len(n.rootBucket) > 0 {
c = n.GetBucket(tx).Cursor()
} else {
c = tx.Cursor()
}
return c
}
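
A sketch of the BucketScanner above: child buckets created via From can be rediscovered later by name prefix or by lexical range. Bucket and key names are made up for the example, and the storm.Open call is assumed.

package main

import (
	"log"

	"github.com/asdine/storm"
)

func main() {
	db, err := storm.Open("logs.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Writing through From creates the named child buckets on demand.
	for _, day := range []string{"logs-2020-04-13", "logs-2020-04-14", "logs-2020-04-15"} {
		if err := db.From(day).Set("meta", "created", true); err != nil {
			log.Fatal(err)
		}
	}

	// Every root bucket whose name starts with "logs-2020-04".
	nodes := db.PrefixScan("logs-2020-04")

	// Root buckets within a sortable name range, e.g. a date range.
	ranged := db.RangeScan("logs-2020-04-14", "logs-2020-04-15")

	log.Printf("prefix matches: %d, range matches: %d", len(nodes), len(ranged))
}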

608
vendor/github.com/asdine/storm/sink.go generated vendored Normal file
View File

@ -0,0 +1,608 @@
package storm
import (
"reflect"
"sort"
"github.com/asdine/storm/index"
"github.com/asdine/storm/q"
bolt "go.etcd.io/bbolt"
)
type item struct {
value *reflect.Value
bucket *bolt.Bucket
k []byte
v []byte
}
func newSorter(n Node, snk sink) *sorter {
return &sorter{
node: n,
sink: snk,
skip: 0,
limit: -1,
list: make([]*item, 0),
err: make(chan error),
done: make(chan struct{}),
}
}
type sorter struct {
node Node
sink sink
list []*item
skip int
limit int
orderBy []string
reverse bool
err chan error
done chan struct{}
}
func (s *sorter) filter(tree q.Matcher, bucket *bolt.Bucket, k, v []byte) (bool, error) {
itm := &item{
bucket: bucket,
k: k,
v: v,
}
rsink, ok := s.sink.(reflectSink)
if !ok {
return s.add(itm)
}
newElem := rsink.elem()
if err := s.node.Codec().Unmarshal(v, newElem.Interface()); err != nil {
return false, err
}
itm.value = &newElem
if tree != nil {
ok, err := tree.Match(newElem.Interface())
if err != nil {
return false, err
}
if !ok {
return false, nil
}
}
if len(s.orderBy) == 0 {
return s.add(itm)
}
if _, ok := s.sink.(sliceSink); ok {
// add directly to sink, we'll apply skip/limits after sorting
return false, s.sink.add(itm)
}
s.list = append(s.list, itm)
return false, nil
}
func (s *sorter) add(itm *item) (stop bool, err error) {
if s.limit == 0 {
return true, nil
}
if s.skip > 0 {
s.skip--
return false, nil
}
if s.limit > 0 {
s.limit--
}
err = s.sink.add(itm)
return s.limit == 0, err
}
func (s *sorter) compareValue(left reflect.Value, right reflect.Value) int {
if !left.IsValid() || !right.IsValid() {
if left.IsValid() {
return 1
}
return -1
}
switch left.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
l, r := left.Int(), right.Int()
if l < r {
return -1
}
if l > r {
return 1
}
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
l, r := left.Uint(), right.Uint()
if l < r {
return -1
}
if l > r {
return 1
}
case reflect.Float32, reflect.Float64:
l, r := left.Float(), right.Float()
if l < r {
return -1
}
if l > r {
return 1
}
case reflect.String:
l, r := left.String(), right.String()
if l < r {
return -1
}
if l > r {
return 1
}
default:
rawLeft, err := toBytes(left.Interface(), s.node.Codec())
if err != nil {
return -1
}
rawRight, err := toBytes(right.Interface(), s.node.Codec())
if err != nil {
return 1
}
l, r := string(rawLeft), string(rawRight)
if l < r {
return -1
}
if l > r {
return 1
}
}
return 0
}
func (s *sorter) less(leftElem reflect.Value, rightElem reflect.Value) bool {
for _, orderBy := range s.orderBy {
leftField := reflect.Indirect(leftElem).FieldByName(orderBy)
if !leftField.IsValid() {
s.err <- ErrNotFound
return false
}
rightField := reflect.Indirect(rightElem).FieldByName(orderBy)
if !rightField.IsValid() {
s.err <- ErrNotFound
return false
}
direction := 1
if s.reverse {
direction = -1
}
switch s.compareValue(leftField, rightField) * direction {
case -1:
return true
case 1:
return false
default:
continue
}
}
return false
}
func (s *sorter) flush() error {
if len(s.orderBy) == 0 {
return s.sink.flush()
}
go func() {
sort.Sort(s)
close(s.err)
}()
err := <-s.err
close(s.done)
if err != nil {
return err
}
if ssink, ok := s.sink.(sliceSink); ok {
if !ssink.slice().IsValid() {
return s.sink.flush()
}
if s.skip >= ssink.slice().Len() {
ssink.reset()
return s.sink.flush()
}
leftBound := s.skip
if leftBound < 0 {
leftBound = 0
}
limit := s.limit
if s.limit < 0 {
limit = 0
}
rightBound := leftBound + limit
if rightBound > ssink.slice().Len() || rightBound == leftBound {
rightBound = ssink.slice().Len()
}
ssink.setSlice(ssink.slice().Slice(leftBound, rightBound))
return s.sink.flush()
}
for _, itm := range s.list {
if itm == nil {
break
}
stop, err := s.add(itm)
if err != nil {
return err
}
if stop {
break
}
}
return s.sink.flush()
}
func (s *sorter) Len() int {
// skip if we encountered an earlier error
select {
case <-s.done:
return 0
default:
}
if ssink, ok := s.sink.(sliceSink); ok {
return ssink.slice().Len()
}
return len(s.list)
}
func (s *sorter) Less(i, j int) bool {
// skip if we encountered an earlier error
select {
case <-s.done:
return false
default:
}
if ssink, ok := s.sink.(sliceSink); ok {
return s.less(ssink.slice().Index(i), ssink.slice().Index(j))
}
return s.less(*s.list[i].value, *s.list[j].value)
}
type sink interface {
bucketName() string
flush() error
add(*item) error
readOnly() bool
}
type reflectSink interface {
elem() reflect.Value
}
type sliceSink interface {
slice() reflect.Value
setSlice(reflect.Value)
reset()
}
func newListSink(node Node, to interface{}) (*listSink, error) {
ref := reflect.ValueOf(to)
if ref.Kind() != reflect.Ptr || reflect.Indirect(ref).Kind() != reflect.Slice {
return nil, ErrSlicePtrNeeded
}
sliceType := reflect.Indirect(ref).Type()
elemType := sliceType.Elem()
if elemType.Kind() == reflect.Ptr {
elemType = elemType.Elem()
}
if elemType.Name() == "" {
return nil, ErrNoName
}
return &listSink{
node: node,
ref: ref,
isPtr: sliceType.Elem().Kind() == reflect.Ptr,
elemType: elemType,
name: elemType.Name(),
results: reflect.MakeSlice(reflect.Indirect(ref).Type(), 0, 0),
}, nil
}
type listSink struct {
node Node
ref reflect.Value
results reflect.Value
elemType reflect.Type
name string
isPtr bool
idx int
}
func (l *listSink) slice() reflect.Value {
return l.results
}
func (l *listSink) setSlice(s reflect.Value) {
l.results = s
}
func (l *listSink) reset() {
l.results = reflect.MakeSlice(reflect.Indirect(l.ref).Type(), 0, 0)
}
func (l *listSink) elem() reflect.Value {
if l.results.IsValid() && l.idx < l.results.Len() {
return l.results.Index(l.idx).Addr()
}
return reflect.New(l.elemType)
}
func (l *listSink) bucketName() string {
return l.name
}
func (l *listSink) add(i *item) error {
if l.idx == l.results.Len() {
if l.isPtr {
l.results = reflect.Append(l.results, *i.value)
} else {
l.results = reflect.Append(l.results, reflect.Indirect(*i.value))
}
}
l.idx++
return nil
}
func (l *listSink) flush() error {
if l.results.IsValid() && l.results.Len() > 0 {
reflect.Indirect(l.ref).Set(l.results)
return nil
}
return ErrNotFound
}
func (l *listSink) readOnly() bool {
return true
}
func newFirstSink(node Node, to interface{}) (*firstSink, error) {
ref := reflect.ValueOf(to)
if !ref.IsValid() || ref.Kind() != reflect.Ptr || ref.Elem().Kind() != reflect.Struct {
return nil, ErrStructPtrNeeded
}
return &firstSink{
node: node,
ref: ref,
}, nil
}
type firstSink struct {
node Node
ref reflect.Value
found bool
}
func (f *firstSink) elem() reflect.Value {
return reflect.New(reflect.Indirect(f.ref).Type())
}
func (f *firstSink) bucketName() string {
return reflect.Indirect(f.ref).Type().Name()
}
func (f *firstSink) add(i *item) error {
reflect.Indirect(f.ref).Set(i.value.Elem())
f.found = true
return nil
}
func (f *firstSink) flush() error {
if !f.found {
return ErrNotFound
}
return nil
}
func (f *firstSink) readOnly() bool {
return true
}
func newDeleteSink(node Node, kind interface{}) (*deleteSink, error) {
ref := reflect.ValueOf(kind)
if !ref.IsValid() || ref.Kind() != reflect.Ptr || ref.Elem().Kind() != reflect.Struct {
return nil, ErrStructPtrNeeded
}
return &deleteSink{
node: node,
ref: ref,
}, nil
}
type deleteSink struct {
node Node
ref reflect.Value
removed int
}
func (d *deleteSink) elem() reflect.Value {
return reflect.New(reflect.Indirect(d.ref).Type())
}
func (d *deleteSink) bucketName() string {
return reflect.Indirect(d.ref).Type().Name()
}
func (d *deleteSink) add(i *item) error {
info, err := extract(&d.ref)
if err != nil {
return err
}
for fieldName, fieldCfg := range info.Fields {
if fieldCfg.Index == "" {
continue
}
idx, err := getIndex(i.bucket, fieldCfg.Index, fieldName)
if err != nil {
return err
}
err = idx.RemoveID(i.k)
if err != nil {
if err == index.ErrNotFound {
return ErrNotFound
}
return err
}
}
d.removed++
return i.bucket.Delete(i.k)
}
func (d *deleteSink) flush() error {
if d.removed == 0 {
return ErrNotFound
}
return nil
}
func (d *deleteSink) readOnly() bool {
return false
}
func newCountSink(node Node, kind interface{}) (*countSink, error) {
ref := reflect.ValueOf(kind)
if !ref.IsValid() || ref.Kind() != reflect.Ptr || ref.Elem().Kind() != reflect.Struct {
return nil, ErrStructPtrNeeded
}
return &countSink{
node: node,
ref: ref,
}, nil
}
type countSink struct {
node Node
ref reflect.Value
counter int
}
func (c *countSink) elem() reflect.Value {
return reflect.New(reflect.Indirect(c.ref).Type())
}
func (c *countSink) bucketName() string {
return reflect.Indirect(c.ref).Type().Name()
}
func (c *countSink) add(i *item) error {
c.counter++
return nil
}
func (c *countSink) flush() error {
return nil
}
func (c *countSink) readOnly() bool {
return true
}
func newRawSink() *rawSink {
return &rawSink{}
}
type rawSink struct {
results [][]byte
execFn func([]byte, []byte) error
}
func (r *rawSink) add(i *item) error {
if r.execFn != nil {
err := r.execFn(i.k, i.v)
if err != nil {
return err
}
} else {
r.results = append(r.results, i.v)
}
return nil
}
func (r *rawSink) bucketName() string {
return ""
}
func (r *rawSink) flush() error {
return nil
}
func (r *rawSink) readOnly() bool {
return true
}
func newEachSink(to interface{}) (*eachSink, error) {
ref := reflect.ValueOf(to)
if !ref.IsValid() || ref.Kind() != reflect.Ptr || ref.Elem().Kind() != reflect.Struct {
return nil, ErrStructPtrNeeded
}
return &eachSink{
ref: ref,
}, nil
}
type eachSink struct {
ref reflect.Value
execFn func(interface{}) error
}
func (e *eachSink) elem() reflect.Value {
return reflect.New(reflect.Indirect(e.ref).Type())
}
func (e *eachSink) bucketName() string {
return reflect.Indirect(e.ref).Type().Name()
}
func (e *eachSink) add(i *item) error {
return e.execFn(i.value.Interface())
}
func (e *eachSink) flush() error {
return nil
}
func (e *eachSink) readOnly() bool {
return true
}

22
vendor/github.com/asdine/storm/sink_sorter_swap.go generated vendored Normal file
View File

@ -0,0 +1,22 @@
// +build !go1.8
package storm
import "reflect"
func (s *sorter) Swap(i, j int) {
// skip if we encountered an earlier error
select {
case <-s.done:
return
default:
}
if ssink, ok := s.sink.(sliceSink); ok {
x, y := ssink.slice().Index(i).Interface(), ssink.slice().Index(j).Interface()
ssink.slice().Index(i).Set(reflect.ValueOf(y))
ssink.slice().Index(j).Set(reflect.ValueOf(x))
} else {
s.list[i], s.list[j] = s.list[j], s.list[i]
}
}

View File

@ -0,0 +1,20 @@
// +build go1.8
package storm
import "reflect"
func (s *sorter) Swap(i, j int) {
// skip if we encountered an earlier error
select {
case <-s.done:
return
default:
}
if ssink, ok := s.sink.(sliceSink); ok {
reflect.Swapper(ssink.slice().Interface())(i, j)
} else {
s.list[i], s.list[j] = s.list[j], s.list[i]
}
}

425
vendor/github.com/asdine/storm/store.go generated vendored Normal file
View File

@ -0,0 +1,425 @@
package storm
import (
"bytes"
"reflect"
"github.com/asdine/storm/index"
"github.com/asdine/storm/q"
bolt "go.etcd.io/bbolt"
)
// TypeStore stores user defined types in BoltDB.
type TypeStore interface {
Finder
// Init creates the indexes and buckets for a given structure
Init(data interface{}) error
// ReIndex rebuilds all the indexes of a bucket
ReIndex(data interface{}) error
// Save a structure
Save(data interface{}) error
// Update a structure
Update(data interface{}) error
// UpdateField updates a single field
UpdateField(data interface{}, fieldName string, value interface{}) error
// Drop a bucket
Drop(data interface{}) error
// DeleteStruct deletes a structure from the associated bucket
DeleteStruct(data interface{}) error
}
// Init creates the indexes and buckets for a given structure
func (n *node) Init(data interface{}) error {
v := reflect.ValueOf(data)
cfg, err := extract(&v)
if err != nil {
return err
}
return n.readWriteTx(func(tx *bolt.Tx) error {
return n.init(tx, cfg)
})
}
func (n *node) init(tx *bolt.Tx, cfg *structConfig) error {
bucket, err := n.CreateBucketIfNotExists(tx, cfg.Name)
if err != nil {
return err
}
// save node configuration in the bucket
_, err = newMeta(bucket, n)
if err != nil {
return err
}
for fieldName, fieldCfg := range cfg.Fields {
if fieldCfg.Index == "" {
continue
}
switch fieldCfg.Index {
case tagUniqueIdx:
_, err = index.NewUniqueIndex(bucket, []byte(indexPrefix+fieldName))
case tagIdx:
_, err = index.NewListIndex(bucket, []byte(indexPrefix+fieldName))
default:
err = ErrIdxNotFound
}
if err != nil {
return err
}
}
return nil
}
func (n *node) ReIndex(data interface{}) error {
ref := reflect.ValueOf(data)
if !ref.IsValid() || ref.Kind() != reflect.Ptr || ref.Elem().Kind() != reflect.Struct {
return ErrStructPtrNeeded
}
cfg, err := extract(&ref)
if err != nil {
return err
}
return n.readWriteTx(func(tx *bolt.Tx) error {
return n.reIndex(tx, data, cfg)
})
}
func (n *node) reIndex(tx *bolt.Tx, data interface{}, cfg *structConfig) error {
root := n.WithTransaction(tx)
nodes := root.From(cfg.Name).PrefixScan(indexPrefix)
bucket := root.GetBucket(tx, cfg.Name)
if bucket == nil {
return ErrNotFound
}
for _, node := range nodes {
buckets := node.Bucket()
name := buckets[len(buckets)-1]
err := bucket.DeleteBucket([]byte(name))
if err != nil {
return err
}
}
total, err := root.Count(data)
if err != nil {
return err
}
for i := 0; i < total; i++ {
err = root.Select(q.True()).Skip(i).First(data)
if err != nil {
return err
}
err = root.Update(data)
if err != nil {
return err
}
}
return nil
}
// Save a structure
func (n *node) Save(data interface{}) error {
ref := reflect.ValueOf(data)
if !ref.IsValid() || ref.Kind() != reflect.Ptr || ref.Elem().Kind() != reflect.Struct {
return ErrStructPtrNeeded
}
cfg, err := extract(&ref)
if err != nil {
return err
}
if cfg.ID.IsZero {
if !cfg.ID.IsInteger || !cfg.ID.Increment {
return ErrZeroID
}
}
return n.readWriteTx(func(tx *bolt.Tx) error {
return n.save(tx, cfg, data, false)
})
}
func (n *node) save(tx *bolt.Tx, cfg *structConfig, data interface{}, update bool) error {
bucket, err := n.CreateBucketIfNotExists(tx, cfg.Name)
if err != nil {
return err
}
// save node configuration in the bucket
meta, err := newMeta(bucket, n)
if err != nil {
return err
}
if cfg.ID.IsZero {
err = meta.increment(cfg.ID)
if err != nil {
return err
}
}
id, err := toBytes(cfg.ID.Value.Interface(), n.codec)
if err != nil {
return err
}
for fieldName, fieldCfg := range cfg.Fields {
if !update && !fieldCfg.IsID && fieldCfg.Increment && fieldCfg.IsInteger && fieldCfg.IsZero {
err = meta.increment(fieldCfg)
if err != nil {
return err
}
}
if fieldCfg.Index == "" {
continue
}
idx, err := getIndex(bucket, fieldCfg.Index, fieldName)
if err != nil {
return err
}
if update && fieldCfg.IsZero && !fieldCfg.ForceUpdate {
continue
}
if fieldCfg.IsZero {
err = idx.RemoveID(id)
if err != nil {
return err
}
continue
}
value, err := toBytes(fieldCfg.Value.Interface(), n.codec)
if err != nil {
return err
}
var found bool
idsSaved, err := idx.All(value, nil)
if err != nil {
return err
}
for _, idSaved := range idsSaved {
if bytes.Equal(idSaved, id) {
found = true
break
}
}
if found {
continue
}
err = idx.RemoveID(id)
if err != nil {
return err
}
err = idx.Add(value, id)
if err != nil {
if err == index.ErrAlreadyExists {
return ErrAlreadyExists
}
return err
}
}
raw, err := n.codec.Marshal(data)
if err != nil {
return err
}
return bucket.Put(id, raw)
}
// Update a structure
func (n *node) Update(data interface{}) error {
return n.update(data, func(ref *reflect.Value, current *reflect.Value, cfg *structConfig) error {
numfield := ref.NumField()
for i := 0; i < numfield; i++ {
f := ref.Field(i)
if ref.Type().Field(i).PkgPath != "" {
continue
}
zero := reflect.Zero(f.Type()).Interface()
actual := f.Interface()
if !reflect.DeepEqual(actual, zero) {
cf := current.Field(i)
cf.Set(f)
idxInfo, ok := cfg.Fields[ref.Type().Field(i).Name]
if ok {
idxInfo.Value = &cf
}
}
}
return nil
})
}
// UpdateField updates a single field
func (n *node) UpdateField(data interface{}, fieldName string, value interface{}) error {
return n.update(data, func(ref *reflect.Value, current *reflect.Value, cfg *structConfig) error {
f := current.FieldByName(fieldName)
if !f.IsValid() {
return ErrNotFound
}
tf, _ := current.Type().FieldByName(fieldName)
if tf.PkgPath != "" {
return ErrNotFound
}
v := reflect.ValueOf(value)
if v.Kind() != f.Kind() {
return ErrIncompatibleValue
}
f.Set(v)
idxInfo, ok := cfg.Fields[fieldName]
if ok {
idxInfo.Value = &f
idxInfo.IsZero = isZero(idxInfo.Value)
idxInfo.ForceUpdate = true
}
return nil
})
}
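Continuing the hypothetical `User` example from the sketch after the TypeStore interface, a hedged illustration of the difference between Update and UpdateField (the `patchUser` helper is invented for illustration):

```go
// patchUser assumes the hypothetical User type and an open *storm.DB.
func patchUser(db *storm.DB) error {
	// Update copies only the non-zero fields of the passed struct onto the
	// stored record, so Group is left untouched here.
	if err := db.Update(&User{ID: 1, Email: "new@example.com"}); err != nil {
		return err
	}
	// UpdateField targets exactly one field and, via ForceUpdate, can set it
	// to its zero value, which a plain Update would have skipped.
	return db.UpdateField(&User{ID: 1}, "Group", "")
}
```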
func (n *node) update(data interface{}, fn func(*reflect.Value, *reflect.Value, *structConfig) error) error {
ref := reflect.ValueOf(data)
if !ref.IsValid() || ref.Kind() != reflect.Ptr || ref.Elem().Kind() != reflect.Struct {
return ErrStructPtrNeeded
}
cfg, err := extract(&ref)
if err != nil {
return err
}
if cfg.ID.IsZero {
return ErrNoID
}
current := reflect.New(reflect.Indirect(ref).Type())
return n.readWriteTx(func(tx *bolt.Tx) error {
err = n.WithTransaction(tx).One(cfg.ID.Name, cfg.ID.Value.Interface(), current.Interface())
if err != nil {
return err
}
ref := reflect.ValueOf(data).Elem()
cref := current.Elem()
err = fn(&ref, &cref, cfg)
if err != nil {
return err
}
return n.save(tx, cfg, current.Interface(), true)
})
}
// Drop a bucket
func (n *node) Drop(data interface{}) error {
var bucketName string
v := reflect.ValueOf(data)
if v.Kind() != reflect.String {
info, err := extract(&v)
if err != nil {
return err
}
bucketName = info.Name
} else {
bucketName = v.Interface().(string)
}
return n.readWriteTx(func(tx *bolt.Tx) error {
return n.drop(tx, bucketName)
})
}
func (n *node) drop(tx *bolt.Tx, bucketName string) error {
bucket := n.GetBucket(tx)
if bucket == nil {
return tx.DeleteBucket([]byte(bucketName))
}
return bucket.DeleteBucket([]byte(bucketName))
}
// DeleteStruct deletes a structure from the associated bucket
func (n *node) DeleteStruct(data interface{}) error {
ref := reflect.ValueOf(data)
if !ref.IsValid() || ref.Kind() != reflect.Ptr || ref.Elem().Kind() != reflect.Struct {
return ErrStructPtrNeeded
}
cfg, err := extract(&ref)
if err != nil {
return err
}
id, err := toBytes(cfg.ID.Value.Interface(), n.codec)
if err != nil {
return err
}
return n.readWriteTx(func(tx *bolt.Tx) error {
return n.deleteStruct(tx, cfg, id)
})
}
func (n *node) deleteStruct(tx *bolt.Tx, cfg *structConfig, id []byte) error {
bucket := n.GetBucket(tx, cfg.Name)
if bucket == nil {
return ErrNotFound
}
for fieldName, fieldCfg := range cfg.Fields {
if fieldCfg.Index == "" {
continue
}
idx, err := getIndex(bucket, fieldCfg.Index, fieldName)
if err != nil {
return err
}
err = idx.RemoveID(id)
if err != nil {
if err == index.ErrNotFound {
return ErrNotFound
}
return err
}
}
raw := bucket.Get(id)
if raw == nil {
return ErrNotFound
}
return bucket.Delete(id)
}
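Still assuming that hypothetical `User` type, a short sketch of DeleteStruct and Drop (the `cleanup` helper is illustrative only):

```go
// cleanup assumes the hypothetical User type and an open *storm.DB; the two
// calls are independent and only grouped here for illustration.
func cleanup(db *storm.DB) error {
	// Remove a single record; its index entries are removed as well.
	if err := db.DeleteStruct(&User{ID: 1}); err != nil {
		return err
	}
	// Drop the whole bucket; a bucket name string would also be accepted.
	return db.Drop(&User{})
}
```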

142
vendor/github.com/asdine/storm/storm.go generated vendored Normal file
View File

@ -0,0 +1,142 @@
package storm
import (
"bytes"
"encoding/binary"
"time"
"github.com/asdine/storm/codec"
"github.com/asdine/storm/codec/json"
bolt "go.etcd.io/bbolt"
)
const (
dbinfo = "__storm_db"
metadataBucket = "__storm_metadata"
)
// Defaults to json
var defaultCodec = json.Codec
// Open opens a database at the given path with optional Storm options.
func Open(path string, stormOptions ...func(*Options) error) (*DB, error) {
var err error
var opts Options
for _, option := range stormOptions {
if err = option(&opts); err != nil {
return nil, err
}
}
s := DB{
Bolt: opts.bolt,
}
n := node{
s: &s,
codec: opts.codec,
batchMode: opts.batchMode,
rootBucket: opts.rootBucket,
}
if n.codec == nil {
n.codec = defaultCodec
}
if opts.boltMode == 0 {
opts.boltMode = 0600
}
if opts.boltOptions == nil {
opts.boltOptions = &bolt.Options{Timeout: 1 * time.Second}
}
s.Node = &n
// skip if UseDB option is used
if s.Bolt == nil {
s.Bolt, err = bolt.Open(path, opts.boltMode, opts.boltOptions)
if err != nil {
return nil, err
}
}
err = s.checkVersion()
if err != nil {
return nil, err
}
return &s, nil
}
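A minimal sketch of calling Open with non-default options (the file name, codec choice, and timeout below are illustrative assumptions, not values used by this project):

```go
package main

import (
	"time"

	"github.com/asdine/storm"
	"github.com/asdine/storm/codec/gob"
	bolt "go.etcd.io/bbolt"
)

func main() {
	// With no options, Open falls back to the JSON codec, file mode 0600 and
	// a 1-second bbolt timeout, as the code above shows.
	db, err := storm.Open("app.db",
		storm.Codec(gob.Codec), // swap the default JSON codec for gob
		storm.BoltOptions(0600, &bolt.Options{Timeout: 5 * time.Second}),
	)
	if err != nil {
		panic(err)
	}
	defer db.Close()
}
```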
// DB is the wrapper around BoltDB. It contains an instance of BoltDB and uses it to perform all the
// needed operations.
type DB struct {
// The root node that points to the root bucket.
Node
// Bolt is still easily accessible
Bolt *bolt.DB
}
// Close the database
func (s *DB) Close() error {
return s.Bolt.Close()
}
func (s *DB) checkVersion() error {
var v string
err := s.Get(dbinfo, "version", &v)
if err != nil && err != ErrNotFound {
return err
}
// for now, we only set the current version if it doesn't exist.
// v1 and v2 database files are compatible.
if v == "" {
return s.Set(dbinfo, "version", Version)
}
return nil
}
// toBytes turns an interface into a slice of bytes
func toBytes(key interface{}, codec codec.MarshalUnmarshaler) ([]byte, error) {
if key == nil {
return nil, nil
}
switch t := key.(type) {
case []byte:
return t, nil
case string:
return []byte(t), nil
case int:
return numbertob(int64(t))
case uint:
return numbertob(uint64(t))
case int8, int16, int32, int64, uint8, uint16, uint32, uint64:
return numbertob(t)
default:
return codec.Marshal(key)
}
}
func numbertob(v interface{}) ([]byte, error) {
var buf bytes.Buffer
err := binary.Write(&buf, binary.BigEndian, v)
if err != nil {
return nil, err
}
return buf.Bytes(), nil
}
func numberfromb(raw []byte) (int64, error) {
r := bytes.NewReader(raw)
var to int64
err := binary.Read(r, binary.BigEndian, &to)
if err != nil {
return 0, err
}
return to, nil
}
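The integer branch of toBytes goes through numbertob rather than the codec because big-endian bytes sort the same way the numbers do, which is what BoltDB's byte-ordered keys need. A small standalone illustration (not part of the library):

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

func main() {
	a := make([]byte, 8)
	b := make([]byte, 8)
	binary.BigEndian.PutUint64(a, 2)
	binary.BigEndian.PutUint64(b, 256)
	// -1: 2 sorts before 256, matching numeric order. A little-endian
	// encoding would get this pair backwards ([2 0 ...] > [0 1 ...]).
	fmt.Println(bytes.Compare(a, b))
}
```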

52
vendor/github.com/asdine/storm/transaction.go generated vendored Normal file
View File

@ -0,0 +1,52 @@
package storm
import bolt "go.etcd.io/bbolt"
// Tx is a transaction.
type Tx interface {
// Commit writes all changes to disk.
Commit() error
// Rollback closes the transaction and ignores all previous updates.
Rollback() error
}
// Begin starts a new transaction.
func (n node) Begin(writable bool) (Node, error) {
var err error
n.tx, err = n.s.Bolt.Begin(writable)
if err != nil {
return nil, err
}
return &n, nil
}
// Rollback closes the transaction and ignores all previous updates.
func (n *node) Rollback() error {
if n.tx == nil {
return ErrNotInTransaction
}
err := n.tx.Rollback()
if err == bolt.ErrTxClosed {
return ErrNotInTransaction
}
return err
}
// Commit writes all changes to disk.
func (n *node) Commit() error {
if n.tx == nil {
return ErrNotInTransaction
}
err := n.tx.Commit()
if err == bolt.ErrTxClosed {
return ErrNotInTransaction
}
return err
}
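A hedged sketch of manual transaction use (the `Item` type and `saveTwo` helper are invented for illustration): Begin returns a Node bound to a single bbolt transaction, and the usual pattern is to defer Rollback and return Commit:

```go
package txexample

import "github.com/asdine/storm"

// Item is a throwaway example type for this sketch only.
type Item struct {
	ID   int `storm:"id,increment"`
	Name string
}

func saveTwo(db *storm.DB) error {
	tx, err := db.Begin(true) // writable transaction
	if err != nil {
		return err
	}
	// Harmless after a successful Commit: Rollback then only reports
	// ErrNotInTransaction, which the defer discards.
	defer tx.Rollback()

	if err := tx.Save(&Item{Name: "first"}); err != nil {
		return err
	}
	if err := tx.Save(&Item{Name: "second"}); err != nil {
		return err
	}
	return tx.Commit()
}
```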

4
vendor/github.com/asdine/storm/version.go generated vendored Normal file
View File

@ -0,0 +1,4 @@
package storm
// Version of Storm
const Version = "2.0.0"

5
vendor/github.com/fatih/color/.travis.yml generated vendored Normal file
View File

@ -0,0 +1,5 @@
language: go
go:
- 1.8.x
- tip

27
vendor/github.com/fatih/color/Gopkg.lock generated vendored Normal file
View File

@ -0,0 +1,27 @@
# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'.
[[projects]]
name = "github.com/mattn/go-colorable"
packages = ["."]
revision = "167de6bfdfba052fa6b2d3664c8f5272e23c9072"
version = "v0.0.9"
[[projects]]
name = "github.com/mattn/go-isatty"
packages = ["."]
revision = "0360b2af4f38e8d38c7fce2a9f4e702702d73a39"
version = "v0.0.3"
[[projects]]
branch = "master"
name = "golang.org/x/sys"
packages = ["unix"]
revision = "37707fdb30a5b38865cfb95e5aab41707daec7fd"
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
inputs-digest = "e8a50671c3cb93ea935bf210b1cd20702876b9d9226129be581ef646d1565cdc"
solver-name = "gps-cdcl"
solver-version = 1

30
vendor/github.com/fatih/color/Gopkg.toml generated vendored Normal file
View File

@ -0,0 +1,30 @@
# Gopkg.toml example
#
# Refer to https://github.com/golang/dep/blob/master/docs/Gopkg.toml.md
# for detailed Gopkg.toml documentation.
#
# required = ["github.com/user/thing/cmd/thing"]
# ignored = ["github.com/user/project/pkgX", "bitbucket.org/user/project/pkgA/pkgY"]
#
# [[constraint]]
# name = "github.com/user/project"
# version = "1.0.0"
#
# [[constraint]]
# name = "github.com/user/project2"
# branch = "dev"
# source = "github.com/myfork/project2"
#
# [[override]]
# name = "github.com/x/y"
# version = "2.4.0"
[[constraint]]
name = "github.com/mattn/go-colorable"
version = "0.0.9"
[[constraint]]
name = "github.com/mattn/go-isatty"
version = "0.0.3"

20
vendor/github.com/fatih/color/LICENSE.md generated vendored Normal file
View File

@ -0,0 +1,20 @@
The MIT License (MIT)
Copyright (c) 2013 Fatih Arslan
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

179
vendor/github.com/fatih/color/README.md generated vendored Normal file
View File

@ -0,0 +1,179 @@
# Color [![GoDoc](https://godoc.org/github.com/fatih/color?status.svg)](https://godoc.org/github.com/fatih/color) [![Build Status](https://img.shields.io/travis/fatih/color.svg?style=flat-square)](https://travis-ci.org/fatih/color)
Color lets you use colorized outputs in terms of [ANSI Escape
Codes](http://en.wikipedia.org/wiki/ANSI_escape_code#Colors) in Go (Golang). It
has support for Windows too! The API can be used in several ways; pick one that
suits you.
![Color](https://i.imgur.com/c1JI0lA.png)
## Install
```bash
go get github.com/fatih/color
```
Note that the `vendor` folder is here for stability. Remove the folder if you
already have the dependencies in your GOPATH.
## Examples
### Standard colors
```go
// Print with default helper functions
color.Cyan("Prints text in cyan.")
// A newline will be appended automatically
color.Blue("Prints %s in blue.", "text")
// These are using the default foreground colors
color.Red("We have red")
color.Magenta("And many others ..")
```
### Mix and reuse colors
```go
// Create a new color object
c := color.New(color.FgCyan).Add(color.Underline)
c.Println("Prints cyan text with an underline.")
// Or just add them to New()
d := color.New(color.FgCyan, color.Bold)
d.Printf("This prints bold cyan %s\n", "too!.")
// Mix up foreground and background colors, create new mixes!
red := color.New(color.FgRed)
boldRed := red.Add(color.Bold)
boldRed.Println("This will print text in bold red.")
whiteBackground := red.Add(color.BgWhite)
whiteBackground.Println("Red text with white background.")
```
### Use your own output (io.Writer)
```go
// Use your own io.Writer output
color.New(color.FgBlue).Fprintln(myWriter, "blue color!")
blue := color.New(color.FgBlue)
blue.Fprint(myWriter, "This will print text in blue.")
```
### Custom print functions (PrintFunc)
```go
// Create a custom print function for convenience
red := color.New(color.FgRed).PrintfFunc()
red("Warning")
red("Error: %s", err)
// Mix up multiple attributes
notice := color.New(color.Bold, color.FgGreen).PrintlnFunc()
notice("Don't forget this...")
```
### Custom fprint functions (FprintFunc)
```go
blue := color.New(color.FgBlue).FprintfFunc()
blue(myWriter, "important notice: %s", stars)
// Mix up with multiple attributes
success := color.New(color.Bold, color.FgGreen).FprintlnFunc()
success(myWriter, "Don't forget this...")
```
### Insert into noncolor strings (SprintFunc)
```go
// Create SprintXxx functions to mix strings with other non-colorized strings:
yellow := color.New(color.FgYellow).SprintFunc()
red := color.New(color.FgRed).SprintFunc()
fmt.Printf("This is a %s and this is %s.\n", yellow("warning"), red("error"))
info := color.New(color.FgWhite, color.BgGreen).SprintFunc()
fmt.Printf("This %s rocks!\n", info("package"))
// Use helper functions
fmt.Println("This", color.RedString("warning"), "should be not neglected.")
fmt.Printf("%v %v\n", color.GreenString("Info:"), "an important message.")
// Windows supported too! Just don't forget to change the output to color.Output
fmt.Fprintf(color.Output, "Windows support: %s", color.GreenString("PASS"))
```
### Plug into existing code
```go
// Use handy standard colors
color.Set(color.FgYellow)
fmt.Println("Existing text will now be in yellow")
fmt.Printf("This one %s\n", "too")
color.Unset() // Don't forget to unset
// You can mix up parameters
color.Set(color.FgMagenta, color.Bold)
defer color.Unset() // Use it in your function
fmt.Println("All text will now be bold magenta.")
```
### Disable/Enable color
There might be a case where you want to explicitly disable or enable color output. The
`go-isatty` package automatically disables color output for non-tty output streams
(for example, if the output is piped directly to `less`).
`Color` supports disabling and enabling colors both globally and for single color
definitions. For example, suppose you have a CLI app and a `--no-color` bool flag. You
can easily disable the color output with:
```go
var flagNoColor = flag.Bool("no-color", false, "Disable color output")
if *flagNoColor {
color.NoColor = true // disables colorized output
}
```
It also has support for single color definitions (local). You can
disable/enable color output on the fly:
```go
c := color.New(color.FgCyan)
c.Println("Prints cyan text")
c.DisableColor()
c.Println("This is printed without any color")
c.EnableColor()
c.Println("This prints again cyan...")
```
## Todo
* Save/Return previous values
* Evaluate fmt.Formatter interface
## Credits
* [Fatih Arslan](https://github.com/fatih)
* Windows support via @mattn: [colorable](https://github.com/mattn/go-colorable)
## License
The MIT License (MIT) - see [`LICENSE.md`](https://github.com/fatih/color/blob/master/LICENSE.md) for more details

603
vendor/github.com/fatih/color/color.go generated vendored Normal file
View File

@ -0,0 +1,603 @@
package color
import (
"fmt"
"io"
"os"
"strconv"
"strings"
"sync"
"github.com/mattn/go-colorable"
"github.com/mattn/go-isatty"
)
var (
// NoColor defines if the output is colorized or not. It's dynamically set to
// false or true based on the stdout's file descriptor referring to a terminal
// or not. This is a global option and affects all colors. For more control
// over each color block, use the DisableColor() method on individual Color objects.
NoColor = os.Getenv("TERM") == "dumb" ||
(!isatty.IsTerminal(os.Stdout.Fd()) && !isatty.IsCygwinTerminal(os.Stdout.Fd()))
// Output defines the standard output of the print functions. By default
// os.Stdout is used.
Output = colorable.NewColorableStdout()
// Error defines a color supporting writer for os.Stderr.
Error = colorable.NewColorableStderr()
// colorsCache is used to reduce the count of created Color objects and
// allows reusing already created objects with the required Attribute.
colorsCache = make(map[Attribute]*Color)
colorsCacheMu sync.Mutex // protects colorsCache
)
// Color defines a custom color object which is defined by SGR parameters.
type Color struct {
params []Attribute
noColor *bool
}
// Attribute defines a single SGR Code
type Attribute int
const escape = "\x1b"
// Base attributes
const (
Reset Attribute = iota
Bold
Faint
Italic
Underline
BlinkSlow
BlinkRapid
ReverseVideo
Concealed
CrossedOut
)
// Foreground text colors
const (
FgBlack Attribute = iota + 30
FgRed
FgGreen
FgYellow
FgBlue
FgMagenta
FgCyan
FgWhite
)
// Foreground Hi-Intensity text colors
const (
FgHiBlack Attribute = iota + 90
FgHiRed
FgHiGreen
FgHiYellow
FgHiBlue
FgHiMagenta
FgHiCyan
FgHiWhite
)
// Background text colors
const (
BgBlack Attribute = iota + 40
BgRed
BgGreen
BgYellow
BgBlue
BgMagenta
BgCyan
BgWhite
)
// Background Hi-Intensity text colors
const (
BgHiBlack Attribute = iota + 100
BgHiRed
BgHiGreen
BgHiYellow
BgHiBlue
BgHiMagenta
BgHiCyan
BgHiWhite
)
// New returns a newly created color object.
func New(value ...Attribute) *Color {
c := &Color{params: make([]Attribute, 0)}
c.Add(value...)
return c
}
// Set sets the given parameters immediately. It will change the color of
// output with the given SGR parameters until color.Unset() is called.
func Set(p ...Attribute) *Color {
c := New(p...)
c.Set()
return c
}
// Unset resets all escape attributes and clears the output. It should usually
// be called after Set().
func Unset() {
if NoColor {
return
}
fmt.Fprintf(Output, "%s[%dm", escape, Reset)
}
// Set sets the SGR sequence.
func (c *Color) Set() *Color {
if c.isNoColorSet() {
return c
}
fmt.Fprintf(Output, c.format())
return c
}
func (c *Color) unset() {
if c.isNoColorSet() {
return
}
Unset()
}
func (c *Color) setWriter(w io.Writer) *Color {
if c.isNoColorSet() {
return c
}
fmt.Fprintf(w, c.format())
return c
}
func (c *Color) unsetWriter(w io.Writer) {
if c.isNoColorSet() {
return
}
if NoColor {
return
}
fmt.Fprintf(w, "%s[%dm", escape, Reset)
}
// Add is used to chain SGR parameters. Use as many parameters as needed to combine
// and create custom color objects. Example: Add(color.FgRed, color.Underline).
func (c *Color) Add(value ...Attribute) *Color {
c.params = append(c.params, value...)
return c
}
func (c *Color) prepend(value Attribute) {
c.params = append(c.params, 0)
copy(c.params[1:], c.params[0:])
c.params[0] = value
}
// Fprint formats using the default formats for its operands and writes to w.
// Spaces are added between operands when neither is a string.
// It returns the number of bytes written and any write error encountered.
// On Windows, users should wrap w with colorable.NewColorable() if w is of
// type *os.File.
func (c *Color) Fprint(w io.Writer, a ...interface{}) (n int, err error) {
c.setWriter(w)
defer c.unsetWriter(w)
return fmt.Fprint(w, a...)
}
// Print formats using the default formats for its operands and writes to
// standard output. Spaces are added between operands when neither is a
// string. It returns the number of bytes written and any write error
// encountered. This is the standard fmt.Print() method wrapped with the given
// color.
func (c *Color) Print(a ...interface{}) (n int, err error) {
c.Set()
defer c.unset()
return fmt.Fprint(Output, a...)
}
// Fprintf formats according to a format specifier and writes to w.
// It returns the number of bytes written and any write error encountered.
// On Windows, users should wrap w with colorable.NewColorable() if w is of
// type *os.File.
func (c *Color) Fprintf(w io.Writer, format string, a ...interface{}) (n int, err error) {
c.setWriter(w)
defer c.unsetWriter(w)
return fmt.Fprintf(w, format, a...)
}
// Printf formats according to a format specifier and writes to standard output.
// It returns the number of bytes written and any write error encountered.
// This is the standard fmt.Printf() method wrapped with the given color.
func (c *Color) Printf(format string, a ...interface{}) (n int, err error) {
c.Set()
defer c.unset()
return fmt.Fprintf(Output, format, a...)
}
// Fprintln formats using the default formats for its operands and writes to w.
// Spaces are always added between operands and a newline is appended.
// On Windows, users should wrap w with colorable.NewColorable() if w is of
// type *os.File.
func (c *Color) Fprintln(w io.Writer, a ...interface{}) (n int, err error) {
c.setWriter(w)
defer c.unsetWriter(w)
return fmt.Fprintln(w, a...)
}
// Println formats using the default formats for its operands and writes to
// standard output. Spaces are always added between operands and a newline is
// appended. It returns the number of bytes written and any write error
// encountered. This is the standard fmt.Println() method wrapped with the given
// color.
func (c *Color) Println(a ...interface{}) (n int, err error) {
c.Set()
defer c.unset()
return fmt.Fprintln(Output, a...)
}
// Sprint is just like Print, but returns a string instead of printing it.
func (c *Color) Sprint(a ...interface{}) string {
return c.wrap(fmt.Sprint(a...))
}
// Sprintln is just like Println, but returns a string instead of printing it.
func (c *Color) Sprintln(a ...interface{}) string {
return c.wrap(fmt.Sprintln(a...))
}
// Sprintf is just like Printf, but returns a string instead of printing it.
func (c *Color) Sprintf(format string, a ...interface{}) string {
return c.wrap(fmt.Sprintf(format, a...))
}
// FprintFunc returns a new function that prints the passed arguments as
// colorized with color.Fprint().
func (c *Color) FprintFunc() func(w io.Writer, a ...interface{}) {
return func(w io.Writer, a ...interface{}) {
c.Fprint(w, a...)
}
}
// PrintFunc returns a new function that prints the passed arguments as
// colorized with color.Print().
func (c *Color) PrintFunc() func(a ...interface{}) {
return func(a ...interface{}) {
c.Print(a...)
}
}
// FprintfFunc returns a new function that prints the passed arguments as
// colorized with color.Fprintf().
func (c *Color) FprintfFunc() func(w io.Writer, format string, a ...interface{}) {
return func(w io.Writer, format string, a ...interface{}) {
c.Fprintf(w, format, a...)
}
}
// PrintfFunc returns a new function that prints the passed arguments as
// colorized with color.Printf().
func (c *Color) PrintfFunc() func(format string, a ...interface{}) {
return func(format string, a ...interface{}) {
c.Printf(format, a...)
}
}
// FprintlnFunc returns a new function that prints the passed arguments as
// colorized with color.Fprintln().
func (c *Color) FprintlnFunc() func(w io.Writer, a ...interface{}) {
return func(w io.Writer, a ...interface{}) {
c.Fprintln(w, a...)
}
}
// PrintlnFunc returns a new function that prints the passed arguments as
// colorized with color.Println().
func (c *Color) PrintlnFunc() func(a ...interface{}) {
return func(a ...interface{}) {
c.Println(a...)
}
}
// SprintFunc returns a new function that returns colorized strings for the
// given arguments with fmt.Sprint(). Useful to put into or mix into other
// strings. Windows users should use this in conjunction with color.Output, for example:
//
// put := New(FgYellow).SprintFunc()
// fmt.Fprintf(color.Output, "This is a %s", put("warning"))
func (c *Color) SprintFunc() func(a ...interface{}) string {
return func(a ...interface{}) string {
return c.wrap(fmt.Sprint(a...))
}
}
// SprintfFunc returns a new function that returns colorized strings for the
// given arguments with fmt.Sprintf(). Useful to put into or mix into other
// strings. Windows users should use this in conjunction with color.Output.
func (c *Color) SprintfFunc() func(format string, a ...interface{}) string {
return func(format string, a ...interface{}) string {
return c.wrap(fmt.Sprintf(format, a...))
}
}
// SprintlnFunc returns a new function that returns colorized strings for the
// given arguments with fmt.Sprintln(). Useful to put into or mix into other
// strings. Windows users should use this in conjunction with color.Output.
func (c *Color) SprintlnFunc() func(a ...interface{}) string {
return func(a ...interface{}) string {
return c.wrap(fmt.Sprintln(a...))
}
}
// sequence returns a formatted SGR sequence to be plugged into a "\x1b[...m"
// escape sequence. An example output might be: "1;36" -> bold cyan.
func (c *Color) sequence() string {
format := make([]string, len(c.params))
for i, v := range c.params {
format[i] = strconv.Itoa(int(v))
}
return strings.Join(format, ";")
}
// wrap wraps the s string with the colors attributes. The string is ready to
// be printed.
func (c *Color) wrap(s string) string {
if c.isNoColorSet() {
return s
}
return c.format() + s + c.unformat()
}
func (c *Color) format() string {
return fmt.Sprintf("%s[%sm", escape, c.sequence())
}
func (c *Color) unformat() string {
return fmt.Sprintf("%s[%dm", escape, Reset)
}
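For concreteness, a small sketch of what these sequences look like end to end; the quoted output assumes color output is enabled (i.e. `NoColor` is false):

```go
package main

import (
	"fmt"

	"github.com/fatih/color"
)

func main() {
	// Bold is SGR code 1 and FgCyan is 36, so wrap() produces
	// "\x1b[1;36m" + "hi" + "\x1b[0m" (Reset is 0).
	s := color.New(color.Bold, color.FgCyan).Sprint("hi")
	fmt.Printf("%q\n", s) // "\x1b[1;36mhi\x1b[0m"
}
```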
// DisableColor disables the color output. Useful when you want to keep existing
// code unchanged while still controlling its output. Can be used for flags like
// "--no-color". To enable colors again, use the EnableColor() method.
func (c *Color) DisableColor() {
c.noColor = boolPtr(true)
}
// EnableColor enables the color output. Use it in conjunction with
// DisableColor(). Otherwise this method has no side effects.
func (c *Color) EnableColor() {
c.noColor = boolPtr(false)
}
func (c *Color) isNoColorSet() bool {
// check first whether the user has explicitly set a value
if c.noColor != nil {
return *c.noColor
}
// if not, return the global option, which is disabled by default
return NoColor
}
// Equals returns a boolean value indicating whether two colors are equal.
func (c *Color) Equals(c2 *Color) bool {
if len(c.params) != len(c2.params) {
return false
}
for _, attr := range c.params {
if !c2.attrExists(attr) {
return false
}
}
return true
}
func (c *Color) attrExists(a Attribute) bool {
for _, attr := range c.params {
if attr == a {
return true
}
}
return false
}
func boolPtr(v bool) *bool {
return &v
}
func getCachedColor(p Attribute) *Color {
colorsCacheMu.Lock()
defer colorsCacheMu.Unlock()
c, ok := colorsCache[p]
if !ok {
c = New(p)
colorsCache[p] = c
}
return c
}
func colorPrint(format string, p Attribute, a ...interface{}) {
c := getCachedColor(p)
if !strings.HasSuffix(format, "\n") {
format += "\n"
}
if len(a) == 0 {
c.Print(format)
} else {
c.Printf(format, a...)
}
}
func colorString(format string, p Attribute, a ...interface{}) string {
c := getCachedColor(p)
if len(a) == 0 {
return c.SprintFunc()(format)
}
return c.SprintfFunc()(format, a...)
}
// Black is a convenient helper function to print with black foreground. A
// newline is appended to format by default.
func Black(format string, a ...interface{}) { colorPrint(format, FgBlack, a...) }
// Red is a convenient helper function to print with red foreground. A
// newline is appended to format by default.
func Red(format string, a ...interface{}) { colorPrint(format, FgRed, a...) }
// Green is a convenient helper function to print with green foreground. A
// newline is appended to format by default.
func Green(format string, a ...interface{}) { colorPrint(format, FgGreen, a...) }
// Yellow is a convenient helper function to print with yellow foreground.
// A newline is appended to format by default.
func Yellow(format string, a ...interface{}) { colorPrint(format, FgYellow, a...) }
// Blue is a convenient helper function to print with blue foreground. A
// newline is appended to format by default.
func Blue(format string, a ...interface{}) { colorPrint(format, FgBlue, a...) }
// Magenta is a convenient helper function to print with magenta foreground.
// A newline is appended to format by default.
func Magenta(format string, a ...interface{}) { colorPrint(format, FgMagenta, a...) }
// Cyan is a convenient helper function to print with cyan foreground. A
// newline is appended to format by default.
func Cyan(format string, a ...interface{}) { colorPrint(format, FgCyan, a...) }
// White is a convenient helper function to print with white foreground. A
// newline is appended to format by default.
func White(format string, a ...interface{}) { colorPrint(format, FgWhite, a...) }
// BlackString is a convenient helper function to return a string with black
// foreground.
func BlackString(format string, a ...interface{}) string { return colorString(format, FgBlack, a...) }
// RedString is a convenient helper function to return a string with red
// foreground.
func RedString(format string, a ...interface{}) string { return colorString(format, FgRed, a...) }
// GreenString is a convenient helper function to return a string with green
// foreground.
func GreenString(format string, a ...interface{}) string { return colorString(format, FgGreen, a...) }
// YellowString is a convenient helper function to return a string with yellow
// foreground.
func YellowString(format string, a ...interface{}) string { return colorString(format, FgYellow, a...) }
// BlueString is a convenient helper function to return a string with blue
// foreground.
func BlueString(format string, a ...interface{}) string { return colorString(format, FgBlue, a...) }
// MagentaString is a convenient helper function to return a string with magenta
// foreground.
func MagentaString(format string, a ...interface{}) string {
return colorString(format, FgMagenta, a...)
}
// CyanString is a convenient helper function to return a string with cyan
// foreground.
func CyanString(format string, a ...interface{}) string { return colorString(format, FgCyan, a...) }
// WhiteString is a convenient helper function to return a string with white
// foreground.
func WhiteString(format string, a ...interface{}) string { return colorString(format, FgWhite, a...) }
// HiBlack is a convenient helper function to print with hi-intensity black foreground. A
// newline is appended to format by default.
func HiBlack(format string, a ...interface{}) { colorPrint(format, FgHiBlack, a...) }
// HiRed is a convenient helper function to print with hi-intensity red foreground. A
// newline is appended to format by default.
func HiRed(format string, a ...interface{}) { colorPrint(format, FgHiRed, a...) }
// HiGreen is a convenient helper function to print with hi-intensity green foreground. A
// newline is appended to format by default.
func HiGreen(format string, a ...interface{}) { colorPrint(format, FgHiGreen, a...) }
// HiYellow is a convenient helper function to print with hi-intensity yellow foreground.
// A newline is appended to format by default.
func HiYellow(format string, a ...interface{}) { colorPrint(format, FgHiYellow, a...) }
// HiBlue is a convenient helper function to print with hi-intensity blue foreground. A
// newline is appended to format by default.
func HiBlue(format string, a ...interface{}) { colorPrint(format, FgHiBlue, a...) }
// HiMagenta is a convenient helper function to print with hi-intensity magenta foreground.
// A newline is appended to format by default.
func HiMagenta(format string, a ...interface{}) { colorPrint(format, FgHiMagenta, a...) }
// HiCyan is a convenient helper function to print with hi-intensity cyan foreground. A
// newline is appended to format by default.
func HiCyan(format string, a ...interface{}) { colorPrint(format, FgHiCyan, a...) }
// HiWhite is a convenient helper function to print with hi-intensity white foreground. A
// newline is appended to format by default.
func HiWhite(format string, a ...interface{}) { colorPrint(format, FgHiWhite, a...) }
// HiBlackString is a convenient helper function to return a string with hi-intensity black
// foreground.
func HiBlackString(format string, a ...interface{}) string {
return colorString(format, FgHiBlack, a...)
}
// HiRedString is a convenient helper function to return a string with hi-intensity red
// foreground.
func HiRedString(format string, a ...interface{}) string { return colorString(format, FgHiRed, a...) }
// HiGreenString is a convenient helper function to return a string with hi-intensity green
// foreground.
func HiGreenString(format string, a ...interface{}) string {
return colorString(format, FgHiGreen, a...)
}
// HiYellowString is a convenient helper function to return a string with hi-intensity yellow
// foreground.
func HiYellowString(format string, a ...interface{}) string {
return colorString(format, FgHiYellow, a...)
}
// HiBlueString is a convenient helper function to return a string with hi-intensity blue
// foreground.
func HiBlueString(format string, a ...interface{}) string { return colorString(format, FgHiBlue, a...) }
// HiMagentaString is a convenient helper function to return a string with hi-intensity magenta
// foreground.
func HiMagentaString(format string, a ...interface{}) string {
return colorString(format, FgHiMagenta, a...)
}
// HiCyanString is a convenient helper function to return a string with hi-intensity cyan
// foreground.
func HiCyanString(format string, a ...interface{}) string { return colorString(format, FgHiCyan, a...) }
// HiWhiteString is a convenient helper function to return a string with hi-intensity white
// foreground.
func HiWhiteString(format string, a ...interface{}) string {
return colorString(format, FgHiWhite, a...)
}

133
vendor/github.com/fatih/color/doc.go generated vendored Normal file
View File

@ -0,0 +1,133 @@
/*
Package color is an ANSI color package to output colorized or SGR defined
output to the standard output. The API can be used in several ways; pick one
that suits you.
Use simple and default helper functions with predefined foreground colors:
color.Cyan("Prints text in cyan.")
// a newline will be appended automatically
color.Blue("Prints %s in blue.", "text")
// More default foreground colors..
color.Red("We have red")
color.Yellow("Yellow color too!")
color.Magenta("And many others ..")
// Hi-intensity colors
color.HiGreen("Bright green color.")
color.HiBlack("Bright black means gray..")
color.HiWhite("Shiny white color!")
However there are times where custom color mixes are required. Below are some
examples to create custom color objects and use the print functions of each
separate color object.
// Create a new color object
c := color.New(color.FgCyan).Add(color.Underline)
c.Println("Prints cyan text with an underline.")
// Or just add them to New()
d := color.New(color.FgCyan, color.Bold)
d.Printf("This prints bold cyan %s\n", "too!.")
// Mix up foreground and background colors, create new mixes!
red := color.New(color.FgRed)
boldRed := red.Add(color.Bold)
boldRed.Println("This will print text in bold red.")
whiteBackground := red.Add(color.BgWhite)
whiteBackground.Println("Red text with White background.")
// Use your own io.Writer output
color.New(color.FgBlue).Fprintln(myWriter, "blue color!")
blue := color.New(color.FgBlue)
blue.Fprint(myWriter, "This will print text in blue.")
You can create PrintXxx functions to simplify even more:
// Create a custom print function for convenience
red := color.New(color.FgRed).PrintfFunc()
red("warning")
red("error: %s", err)
// Mix up multiple attributes
notice := color.New(color.Bold, color.FgGreen).PrintlnFunc()
notice("don't forget this...")
You can also use the FprintXxx functions to pass your own io.Writer:
blue := color.New(FgBlue).FprintfFunc()
blue(myWriter, "important notice: %s", stars)
// Mix up with multiple attributes
success := color.New(color.Bold, color.FgGreen).FprintlnFunc()
success(myWriter, "don't forget this...")
Or create SprintXxx functions to mix strings with other non-colorized strings:
yellow := New(FgYellow).SprintFunc()
red := New(FgRed).SprintFunc()
fmt.Printf("this is a %s and this is %s.\n", yellow("warning"), red("error"))
info := New(FgWhite, BgGreen).SprintFunc()
fmt.Printf("this %s rocks!\n", info("package"))
Windows support is enabled by default. All Print functions work as intended.
However, for the color.SprintXXX functions, users should use fmt.FprintXXX and
set the output to color.Output:
fmt.Fprintf(color.Output, "Windows support: %s", color.GreenString("PASS"))
info := New(FgWhite, BgGreen).SprintFunc()
fmt.Fprintf(color.Output, "this %s rocks!\n", info("package"))
Using it with existing code is possible. Just use the Set() method to set the
standard output to the given parameters. That way, a rewrite of existing
code is not required.
// Use handy standard colors.
color.Set(color.FgYellow)
fmt.Println("Existing text will be now in Yellow")
fmt.Printf("This one %s\n", "too")
color.Unset() // don't forget to unset
// You can mix up parameters
color.Set(color.FgMagenta, color.Bold)
defer color.Unset() // use it in your function
fmt.Println("All text will be now bold magenta.")
There might be a case where you want to disable color output (for example to
pipe the standard output of your app to somewhere else). `Color` has support to
disable colors both globally and for single color definitions. For example,
suppose you have a CLI app and a `--no-color` bool flag. You can easily disable
the color output with:
var flagNoColor = flag.Bool("no-color", false, "Disable color output")
if *flagNoColor {
color.NoColor = true // disables colorized output
}
It also has support for single color definitions (local). You can
disable/enable color output on the fly:
c := color.New(color.FgCyan)
c.Println("Prints cyan text")
c.DisableColor()
c.Println("This is printed without any color")
c.EnableColor()
c.Println("This prints again cyan...")
*/
package color

8
vendor/github.com/go-ole/go-ole/.travis.yml generated vendored Normal file
View File

@ -0,0 +1,8 @@
language: go
sudo: false
go:
- 1.9.x
- 1.10.x
- 1.11.x
- tip

49
vendor/github.com/go-ole/go-ole/ChangeLog.md generated vendored Normal file
View File

@ -0,0 +1,49 @@
# Version 1.x.x
* **Add more test cases and reference new test COM server project.** (Placeholder for future additions)
# Version 1.2.0-alphaX
**Minimum supported version is now Go 1.4. Go 1.1 support is deprecated, but should still build.**
* Added CI configuration for Travis-CI and AppVeyor.
* Added test InterfaceID and ClassID for the COM Test Server project.
* Added more inline documentation (#83).
* Added IEnumVARIANT implementation (#88).
* Added IEnumVARIANT test cases (#99, #100, #101).
* Added support for retrieving `time.Time` from VARIANT (#92).
* Added test case for IUnknown (#64).
* Added test case for IDispatch (#64).
* Added test cases for scalar variants (#64, #76).
# Version 1.1.1
* Fixes for Linux build.
* Fixes for Windows build.
# Version 1.1.0
The change to provide building on all platforms is a new feature. The increase in minor version reflects that and allows those who wish to stay on 1.0.x to continue to do so. Support for 1.0.x will be limited to bug fixes.
* Move GUID out of variables.go into its own file to make new documentation available.
* Move OleError out of ole.go into its own file to make new documentation available.
* Add documentation to utility functions.
* Add documentation to variant receiver functions.
* Add documentation to ole structures.
* Make variant available to other systems outside of Windows.
* Make OLE structures available to other systems outside of Windows.
## New Features
* The library should now build on all platforms supported by Go. It will no-op on any platform that is not Windows.
* More functions are now documented and available on godoc.org.
# Version 1.0.1
1. Fix package references from repository location change.
# Version 1.0.0
This version is stable enough for use. The COM API is still incomplete, but provides enough functionality for accessing COM servers using IDispatch interface.
There is no changelog for this version. Check commits for history.

21
vendor/github.com/go-ole/go-ole/LICENSE generated vendored Normal file
View File

@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright © 2013-2017 Yasuhiro Matsumoto, <mattn.jp@gmail.com>
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the “Software”), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

46
vendor/github.com/go-ole/go-ole/README.md generated vendored Normal file
View File

@ -0,0 +1,46 @@
# Go OLE
[![Build status](https://ci.appveyor.com/api/projects/status/qr0u2sf7q43us9fj?svg=true)](https://ci.appveyor.com/project/jacobsantos/go-ole-jgs28)
[![Build Status](https://travis-ci.org/go-ole/go-ole.svg?branch=master)](https://travis-ci.org/go-ole/go-ole)
[![GoDoc](https://godoc.org/github.com/go-ole/go-ole?status.svg)](https://godoc.org/github.com/go-ole/go-ole)
Go bindings for Windows COM using shared libraries instead of cgo.
By Yasuhiro Matsumoto.
## Install
To experiment with go-ole, you can just compile and run the example program:
```
go get github.com/go-ole/go-ole
cd /path/to/go-ole/
go test
cd /path/to/go-ole/example/excel
go run excel.go
```
## Continuous Integration
Continuous integration configuration has been added for both Travis-CI and AppVeyor. You will have to add these to your own account for your fork in order for it to run.
**Travis-CI**
Travis-CI was added to check builds on Linux to ensure that `go get` works when cross building. Currently, Travis-CI is not used to test cross-building, but this may change in the future. It is also not currently possible to test the library on Linux, since the COM API is Windows-specific and there is no way to run a COM server on Linux or even connect to a remote one.
**AppVeyor**
AppVeyor is used to build on Windows using the (in-development) test COM server. It is currently only used to test the build and ensure that the code works on Windows. It will be used to register a COM server and then run the test cases based on the test COM server.
The tests currently run and pass, and this should be maintained with future commits.
## Versioning
Go OLE uses [semantic versioning](http://semver.org) for version numbers, which is similar to the version contract of the Go language. This means that the major version will always maintain backwards compatibility with minor versions. Minor versions will only add new features and changes; fixes will always go into a patch release.
This contract should allow you to upgrade to new minor and patch versions without breakage or modifications to your existing code. Leave a ticket if there is breakage, so that it can be fixed.
## LICENSE
Under the MIT License: http://mattn.mit-license.org/2013

54
vendor/github.com/go-ole/go-ole/appveyor.yml generated vendored Normal file
View File

@ -0,0 +1,54 @@
# Notes:
# - Minimal appveyor.yml file is an empty file. All sections are optional.
# - Indent each level of configuration with 2 spaces. Do not use tabs!
# - All section names are case-sensitive.
# - Section names should be unique on each level.
version: "1.3.0.{build}-alpha-{branch}"
os: Windows Server 2012 R2
branches:
only:
- master
- v1.2
- v1.1
- v1.0
skip_tags: true
clone_folder: c:\gopath\src\github.com\go-ole\go-ole
environment:
GOPATH: c:\gopath
matrix:
- GOARCH: amd64
GOVERSION: 1.5
GOROOT: c:\go
DOWNLOADPLATFORM: "x64"
install:
- choco install mingw
- SET PATH=c:\tools\mingw64\bin;%PATH%
# - Download COM Server
- ps: Start-FileDownload "https://github.com/go-ole/test-com-server/releases/download/v1.0.2/test-com-server-${env:DOWNLOADPLATFORM}.zip"
- 7z e test-com-server-%DOWNLOADPLATFORM%.zip -oc:\gopath\src\github.com\go-ole\go-ole > NUL
- c:\gopath\src\github.com\go-ole\go-ole\build\register-assembly.bat
# - set
- go version
- go env
- go get -u golang.org/x/tools/cmd/cover
- go get -u golang.org/x/tools/cmd/godoc
- go get -u golang.org/x/tools/cmd/stringer
build_script:
- cd c:\gopath\src\github.com\go-ole\go-ole
- go get -v -t ./...
- go build
- go test -v -cover ./...
# disable automatic tests
test: off
# disable deployment
deploy: off

344
vendor/github.com/go-ole/go-ole/com.go generated vendored Normal file
View File

@ -0,0 +1,344 @@
// +build windows
package ole
import (
"syscall"
"unicode/utf16"
"unsafe"
)
var (
procCoInitialize, _ = modole32.FindProc("CoInitialize")
procCoInitializeEx, _ = modole32.FindProc("CoInitializeEx")
procCoUninitialize, _ = modole32.FindProc("CoUninitialize")
procCoCreateInstance, _ = modole32.FindProc("CoCreateInstance")
procCoTaskMemFree, _ = modole32.FindProc("CoTaskMemFree")
procCLSIDFromProgID, _ = modole32.FindProc("CLSIDFromProgID")
procCLSIDFromString, _ = modole32.FindProc("CLSIDFromString")
procStringFromCLSID, _ = modole32.FindProc("StringFromCLSID")
procStringFromIID, _ = modole32.FindProc("StringFromIID")
procIIDFromString, _ = modole32.FindProc("IIDFromString")
procCoGetObject, _ = modole32.FindProc("CoGetObject")
procGetUserDefaultLCID, _ = modkernel32.FindProc("GetUserDefaultLCID")
procCopyMemory, _ = modkernel32.FindProc("RtlMoveMemory")
procVariantInit, _ = modoleaut32.FindProc("VariantInit")
procVariantClear, _ = modoleaut32.FindProc("VariantClear")
procVariantTimeToSystemTime, _ = modoleaut32.FindProc("VariantTimeToSystemTime")
procSysAllocString, _ = modoleaut32.FindProc("SysAllocString")
procSysAllocStringLen, _ = modoleaut32.FindProc("SysAllocStringLen")
procSysFreeString, _ = modoleaut32.FindProc("SysFreeString")
procSysStringLen, _ = modoleaut32.FindProc("SysStringLen")
procCreateDispTypeInfo, _ = modoleaut32.FindProc("CreateDispTypeInfo")
procCreateStdDispatch, _ = modoleaut32.FindProc("CreateStdDispatch")
procGetActiveObject, _ = modoleaut32.FindProc("GetActiveObject")
procGetMessageW, _ = moduser32.FindProc("GetMessageW")
procDispatchMessageW, _ = moduser32.FindProc("DispatchMessageW")
)
// coInitialize initializes COM library on current thread.
//
// MSDN documentation suggests that this function should not be called. Call
// CoInitializeEx() instead. The reason has to do with threading and this
// function is only for single-threaded apartments.
//
// That said, most users of the library have gotten away with just this
// function. If you are experiencing threading issues, then use
// CoInitializeEx().
func coInitialize() (err error) {
// http://msdn.microsoft.com/en-us/library/windows/desktop/ms678543(v=vs.85).aspx
// Suggests that no value should be passed to CoInitialize.
// Could just be Call() since the parameter is optional. <-- Needs testing to be sure.
hr, _, _ := procCoInitialize.Call(uintptr(0))
if hr != 0 {
err = NewError(hr)
}
return
}
// coInitializeEx initializes COM library with concurrency model.
func coInitializeEx(coinit uint32) (err error) {
// http://msdn.microsoft.com/en-us/library/windows/desktop/ms695279(v=vs.85).aspx
// Suggests that the first parameter is not only optional but should always be NULL.
hr, _, _ := procCoInitializeEx.Call(uintptr(0), uintptr(coinit))
if hr != 0 {
err = NewError(hr)
}
return
}
// CoInitialize initializes COM library on current thread.
//
// MSDN documentation suggests that this function should not be called. Call
// CoInitializeEx() instead. The reason has to do with threading and this
// function is only for single-threaded apartments.
//
// That said, most users of the library have gotten away with just this
// function. If you are experiencing threading issues, then use
// CoInitializeEx().
func CoInitialize(p uintptr) (err error) {
// p is ignored and won't be used.
// Avoid any variable not used errors.
p = uintptr(0)
return coInitialize()
}
// CoInitializeEx initializes COM library with concurrency model.
func CoInitializeEx(p uintptr, coinit uint32) (err error) {
// Avoid any variable not used errors.
p = uintptr(0)
return coInitializeEx(coinit)
}
// CoUninitialize uninitializes COM Library.
func CoUninitialize() {
procCoUninitialize.Call()
}
// CoTaskMemFree frees memory pointer.
func CoTaskMemFree(memptr uintptr) {
procCoTaskMemFree.Call(memptr)
}
// CLSIDFromProgID retrieves Class Identifier with the given Program Identifier.
//
// The Programmatic Identifier must be registered, because it will be looked up
// in the Windows Registry. The registry entry has the following keys: CLSID,
// Insertable, Protocol and Shell
// (https://msdn.microsoft.com/en-us/library/dd542719(v=vs.85).aspx).
//
// programID identifies the class id with less precision and is not guaranteed
// to be unique. These are usually found in the registry under
// HKEY_LOCAL_MACHINE\SOFTWARE\Classes, usually with the format of
// "Program.Component.Version" with version being optional.
//
// CLSIDFromProgID in Windows API.
func CLSIDFromProgID(progId string) (clsid *GUID, err error) {
var guid GUID
lpszProgID := uintptr(unsafe.Pointer(syscall.StringToUTF16Ptr(progId)))
hr, _, _ := procCLSIDFromProgID.Call(lpszProgID, uintptr(unsafe.Pointer(&guid)))
if hr != 0 {
err = NewError(hr)
}
clsid = &guid
return
}
// CLSIDFromString retrieves Class ID from string representation.
//
// This is technically the string version of the GUID and will convert the
// string to object.
//
// CLSIDFromString in Windows API.
func CLSIDFromString(str string) (clsid *GUID, err error) {
var guid GUID
lpsz := uintptr(unsafe.Pointer(syscall.StringToUTF16Ptr(str)))
hr, _, _ := procCLSIDFromString.Call(lpsz, uintptr(unsafe.Pointer(&guid)))
if hr != 0 {
err = NewError(hr)
}
clsid = &guid
return
}
// StringFromCLSID returns a GUID-formatted string from a GUID object.
func StringFromCLSID(clsid *GUID) (str string, err error) {
var p *uint16
hr, _, _ := procStringFromCLSID.Call(uintptr(unsafe.Pointer(clsid)), uintptr(unsafe.Pointer(&p)))
if hr != 0 {
err = NewError(hr)
}
str = LpOleStrToString(p)
return
}
// IIDFromString returns GUID from program ID.
func IIDFromString(progId string) (clsid *GUID, err error) {
var guid GUID
lpsz := uintptr(unsafe.Pointer(syscall.StringToUTF16Ptr(progId)))
hr, _, _ := procIIDFromString.Call(lpsz, uintptr(unsafe.Pointer(&guid)))
if hr != 0 {
err = NewError(hr)
}
clsid = &guid
return
}
// StringFromIID returns GUID formatted string from GUID object.
func StringFromIID(iid *GUID) (str string, err error) {
var p *uint16
hr, _, _ := procStringFromIID.Call(uintptr(unsafe.Pointer(iid)), uintptr(unsafe.Pointer(&p)))
if hr != 0 {
err = NewError(hr)
}
str = LpOleStrToString(p)
return
}
// CreateInstance of single uninitialized object with GUID.
func CreateInstance(clsid *GUID, iid *GUID) (unk *IUnknown, err error) {
if iid == nil {
iid = IID_IUnknown
}
hr, _, _ := procCoCreateInstance.Call(
uintptr(unsafe.Pointer(clsid)),
0,
CLSCTX_SERVER,
uintptr(unsafe.Pointer(iid)),
uintptr(unsafe.Pointer(&unk)))
if hr != 0 {
err = NewError(hr)
}
return
}
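A minimal, Windows-only sketch of the typical call sequence using only functions defined in this file ("Excel.Application" is just an illustrative ProgID; any registered COM class works):

```go
// +build windows

package main

import ole "github.com/go-ole/go-ole"

func main() {
	// Initialize COM on this thread, resolve the ProgID to a CLSID and
	// create an uninitialized instance of the class.
	if err := ole.CoInitialize(0); err != nil {
		panic(err)
	}
	defer ole.CoUninitialize()

	clsid, err := ole.CLSIDFromProgID("Excel.Application")
	if err != nil {
		panic(err)
	}
	unk, err := ole.CreateInstance(clsid, nil) // nil falls back to IID_IUnknown, per the code above
	if err != nil {
		panic(err)
	}
	defer unk.Release()
}
```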
// GetActiveObject retrieves pointer to active object.
func GetActiveObject(clsid *GUID, iid *GUID) (unk *IUnknown, err error) {
if iid == nil {
iid = IID_IUnknown
}
hr, _, _ := procGetActiveObject.Call(
uintptr(unsafe.Pointer(clsid)),
uintptr(unsafe.Pointer(iid)),
uintptr(unsafe.Pointer(&unk)))
if hr != 0 {
err = NewError(hr)
}
return
}
type BindOpts struct {
CbStruct uint32
GrfFlags uint32
GrfMode uint32
TickCountDeadline uint32
}
// GetObject retrieves pointer to active object.
func GetObject(programID string, bindOpts *BindOpts, iid *GUID) (unk *IUnknown, err error) {
if bindOpts != nil {
bindOpts.CbStruct = uint32(unsafe.Sizeof(BindOpts{}))
}
if iid == nil {
iid = IID_IUnknown
}
hr, _, _ := procCoGetObject.Call(
uintptr(unsafe.Pointer(syscall.StringToUTF16Ptr(programID))),
uintptr(unsafe.Pointer(bindOpts)),
uintptr(unsafe.Pointer(iid)),
uintptr(unsafe.Pointer(&unk)))
if hr != 0 {
err = NewError(hr)
}
return
}
// VariantInit initializes variant.
func VariantInit(v *VARIANT) (err error) {
hr, _, _ := procVariantInit.Call(uintptr(unsafe.Pointer(v)))
if hr != 0 {
err = NewError(hr)
}
return
}
// VariantClear clears value in Variant settings to VT_EMPTY.
func VariantClear(v *VARIANT) (err error) {
hr, _, _ := procVariantClear.Call(uintptr(unsafe.Pointer(v)))
if hr != 0 {
err = NewError(hr)
}
return
}
// SysAllocString allocates memory for string and copies string into memory.
func SysAllocString(v string) (ss *int16) {
pss, _, _ := procSysAllocString.Call(uintptr(unsafe.Pointer(syscall.StringToUTF16Ptr(v))))
ss = (*int16)(unsafe.Pointer(pss))
return
}
// SysAllocStringLen copies up to the length of the given string, returning a pointer.
func SysAllocStringLen(v string) (ss *int16) {
utf16 := utf16.Encode([]rune(v + "\x00"))
ptr := &utf16[0]
pss, _, _ := procSysAllocStringLen.Call(uintptr(unsafe.Pointer(ptr)), uintptr(len(utf16)-1))
ss = (*int16)(unsafe.Pointer(pss))
return
}
// SysFreeString frees string system memory. It must be called for strings allocated with SysAllocString.
func SysFreeString(v *int16) (err error) {
hr, _, _ := procSysFreeString.Call(uintptr(unsafe.Pointer(v)))
if hr != 0 {
err = NewError(hr)
}
return
}
// SysStringLen is the length of the system allocated string.
func SysStringLen(v *int16) uint32 {
l, _, _ := procSysStringLen.Call(uintptr(unsafe.Pointer(v)))
return uint32(l)
}
// CreateStdDispatch provides default IDispatch implementation for IUnknown.
//
// This handles the default IDispatch implementation for objects. It has a few
// limitations, such as only supporting one language. It will also only return
// default exception codes.
func CreateStdDispatch(unk *IUnknown, v uintptr, ptinfo *IUnknown) (disp *IDispatch, err error) {
hr, _, _ := procCreateStdDispatch.Call(
uintptr(unsafe.Pointer(unk)),
v,
uintptr(unsafe.Pointer(ptinfo)),
uintptr(unsafe.Pointer(&disp)))
if hr != 0 {
err = NewError(hr)
}
return
}
// CreateDispTypeInfo provides default ITypeInfo implementation for IDispatch.
//
// This will not handle the full implementation of the interface.
func CreateDispTypeInfo(idata *INTERFACEDATA) (pptinfo *IUnknown, err error) {
hr, _, _ := procCreateDispTypeInfo.Call(
uintptr(unsafe.Pointer(idata)),
uintptr(GetUserDefaultLCID()),
uintptr(unsafe.Pointer(&pptinfo)))
if hr != 0 {
err = NewError(hr)
}
return
}
// copyMemory moves location of a block of memory.
func copyMemory(dest unsafe.Pointer, src unsafe.Pointer, length uint32) {
procCopyMemory.Call(uintptr(dest), uintptr(src), uintptr(length))
}
// GetUserDefaultLCID retrieves current user default locale.
func GetUserDefaultLCID() (lcid uint32) {
ret, _, _ := procGetUserDefaultLCID.Call()
lcid = uint32(ret)
return
}
// GetMessage retrieves a message from the message queue.
//
// This function appears to block. PeekMessage does not block.
func GetMessage(msg *Msg, hwnd uint32, MsgFilterMin uint32, MsgFilterMax uint32) (ret int32, err error) {
r0, _, err := procGetMessageW.Call(uintptr(unsafe.Pointer(msg)), uintptr(hwnd), uintptr(MsgFilterMin), uintptr(MsgFilterMax))
ret = int32(r0)
return
}
// DispatchMessage to window procedure.
func DispatchMessage(msg *Msg) (ret int32) {
r0, _, _ := procDispatchMessageW.Call(uintptr(unsafe.Pointer(msg)))
ret = int32(r0)
return
}

174
vendor/github.com/go-ole/go-ole/com_func.go generated vendored Normal file
View File

@ -0,0 +1,174 @@
// +build !windows
package ole
import (
"time"
"unsafe"
)
// coInitialize initializes COM library on current thread.
//
// MSDN documentation suggests that this function should not be called. Call
// CoInitializeEx() instead. The reason has to do with threading and this
// function is only for single-threaded apartments.
//
// That said, most users of the library have gotten away with just this
// function. If you are experiencing threading issues, then use
// CoInitializeEx().
func coInitialize() error {
return NewError(E_NOTIMPL)
}
// coInitializeEx initializes COM library with concurrency model.
func coInitializeEx(coinit uint32) error {
return NewError(E_NOTIMPL)
}
// CoInitialize initializes COM library on current thread.
//
// MSDN documentation suggests that this function should not be called. Call
// CoInitializeEx() instead. The reason has to do with threading and this
// function is only for single-threaded apartments.
//
// That said, most users of the library have gotten away with just this
// function. If you are experiencing threading issues, then use
// CoInitializeEx().
func CoInitialize(p uintptr) error {
return NewError(E_NOTIMPL)
}
// CoInitializeEx initializes COM library with concurrency model.
func CoInitializeEx(p uintptr, coinit uint32) error {
return NewError(E_NOTIMPL)
}
// CoUninitialize uninitializes COM Library.
func CoUninitialize() {}
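// Illustrative sketch (hypothetical helper, not part of the vendored file):
// the usual initialize/uninitialize bracket around COM work. On Windows the
// real implementations apply; in this non-Windows stub file CoInitialize
// simply returns E_NOTIMPL, so the callback is never reached.
func withCOM(fn func() error) error {
	if err := CoInitialize(0); err != nil {
		return err
	}
	defer CoUninitialize()
	return fn()
}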
// CoTaskMemFree frees memory pointer.
func CoTaskMemFree(memptr uintptr) {}
// CLSIDFromProgID retrieves Class Identifier with the given Program Identifier.
//
// The Programmatic Identifier must be registered, because it will be looked up
// in the Windows Registry. The registry entry has the following keys: CLSID,
// Insertable, Protocol and Shell
// (https://msdn.microsoft.com/en-us/library/dd542719(v=vs.85).aspx).
//
// programID identifies the class id with less precision and is not guaranteed
// to be unique. These are usually found in the registry under
// HKEY_LOCAL_MACHINE\SOFTWARE\Classes, usually with the format of
// "Program.Component.Version" with version being optional.
//
// CLSIDFromProgID in Windows API.
func CLSIDFromProgID(progId string) (*GUID, error) {
return nil, NewError(E_NOTIMPL)
}
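// Illustrative sketch (hypothetical helper, not part of the vendored file):
// resolving a class identifier by ProgID first and by GUID string as a
// fallback, the same order Connection.Create in connect.go uses. On this
// non-Windows build both lookups return E_NOTIMPL.
func resolveCLSID(id string) (*GUID, error) {
	if clsid, err := CLSIDFromProgID(id); err == nil {
		return clsid, nil
	}
	return CLSIDFromString(id)
}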
// CLSIDFromString retrieves Class ID from string representation.
//
// This is technically the string version of the GUID and will convert the
// string to object.
//
// CLSIDFromString in Windows API.
func CLSIDFromString(str string) (*GUID, error) {
return nil, NewError(E_NOTIMPL)
}
// StringFromCLSID returns a GUID-formatted string from a GUID object.
func StringFromCLSID(clsid *GUID) (string, error) {
return "", NewError(E_NOTIMPL)
}
// IIDFromString returns the interface identifier (IID) parsed from its string representation.
func IIDFromString(progId string) (*GUID, error) {
return nil, NewError(E_NOTIMPL)
}
// StringFromIID returns GUID formatted string from GUID object.
func StringFromIID(iid *GUID) (string, error) {
return "", NewError(E_NOTIMPL)
}
// CreateInstance of single uninitialized object with GUID.
func CreateInstance(clsid *GUID, iid *GUID) (*IUnknown, error) {
return nil, NewError(E_NOTIMPL)
}
// GetActiveObject retrieves pointer to active object.
func GetActiveObject(clsid *GUID, iid *GUID) (*IUnknown, error) {
return nil, NewError(E_NOTIMPL)
}
// VariantInit initializes variant.
func VariantInit(v *VARIANT) error {
return NewError(E_NOTIMPL)
}
// VariantClear clears the value in the VARIANT, setting it to VT_EMPTY.
func VariantClear(v *VARIANT) error {
return NewError(E_NOTIMPL)
}
// SysAllocString allocates memory for string and copies string into memory.
func SysAllocString(v string) *int16 {
u := int16(0)
return &u
}
// SysAllocStringLen copies up to the length of the given string, returning a pointer to the allocated copy.
func SysAllocStringLen(v string) *int16 {
u := int16(0)
return &u
}
// SysFreeString frees the string's system memory. It must be called for strings allocated with SysAllocString.
func SysFreeString(v *int16) error {
return NewError(E_NOTIMPL)
}
// SysStringLen returns the length of the system-allocated string.
func SysStringLen(v *int16) uint32 {
return uint32(0)
}
// CreateStdDispatch provides default IDispatch implementation for IUnknown.
//
// This handles the default IDispatch implementation for objects. It has a few
// limitations, such as supporting only one language. It will also only return
// default exception codes.
func CreateStdDispatch(unk *IUnknown, v uintptr, ptinfo *IUnknown) (*IDispatch, error) {
return nil, NewError(E_NOTIMPL)
}
// CreateDispTypeInfo provides default ITypeInfo implementation for IDispatch.
//
// This will not handle the full implementation of the interface.
func CreateDispTypeInfo(idata *INTERFACEDATA) (*IUnknown, error) {
return nil, NewError(E_NOTIMPL)
}
// copyMemory copies a block of memory from src to dest.
func copyMemory(dest unsafe.Pointer, src unsafe.Pointer, length uint32) {}
// GetUserDefaultLCID retrieves current user default locale.
func GetUserDefaultLCID() uint32 {
return uint32(0)
}
// GetMessage retrieves a message from the message queue.
//
// This function appears to block. PeekMessage does not block.
func GetMessage(msg *Msg, hwnd uint32, MsgFilterMin uint32, MsgFilterMax uint32) (int32, error) {
return int32(0), NewError(E_NOTIMPL)
}
// DispatchMessage dispatches a message to the window procedure.
func DispatchMessage(msg *Msg) int32 {
return int32(0)
}
func GetVariantDate(value uint64) (time.Time, error) {
return time.Now(), NewError(E_NOTIMPL)
}

vendor/github.com/go-ole/go-ole/connect.go generated vendored Normal file
@@ -0,0 +1,192 @@
package ole
// Connection contains IUnknown for fluent interface interaction.
//
// Deprecated: Use the oleutil package instead.
type Connection struct {
Object *IUnknown // Access COM
}
// Initialize COM.
func (*Connection) Initialize() (err error) {
return coInitialize()
}
// Uninitialize COM.
func (*Connection) Uninitialize() {
CoUninitialize()
}
// Create creates the IUnknown object, resolving the identifier first as a ProgID and then as a GUID string.
func (c *Connection) Create(progId string) (err error) {
var clsid *GUID
clsid, err = CLSIDFromProgID(progId)
if err != nil {
clsid, err = CLSIDFromString(progId)
if err != nil {
return
}
}
unknown, err := CreateInstance(clsid, IID_IUnknown)
if err != nil {
return
}
c.Object = unknown
return
}
// Release IUnknown object.
func (c *Connection) Release() {
c.Object.Release()
}
// Load COM object from list of programIDs or strings.
func (c *Connection) Load(names ...string) (errors []error) {
	for _, name := range names {
		err := c.Create(name)
		if err != nil {
			errors = append(errors, err)
			continue
		}
		break
	}
	return
}
// Dispatch returns Dispatch object.
func (c *Connection) Dispatch() (object *Dispatch, err error) {
dispatch, err := c.Object.QueryInterface(IID_IDispatch)
if err != nil {
return
}
object = &Dispatch{dispatch}
return
}
// Dispatch stores IDispatch object.
type Dispatch struct {
Object *IDispatch // Dispatch object.
}
// Call method on IDispatch with parameters.
func (d *Dispatch) Call(method string, params ...interface{}) (result *VARIANT, err error) {
id, err := d.GetId(method)
if err != nil {
return
}
result, err = d.Invoke(id, DISPATCH_METHOD, params)
return
}
// MustCall method on IDispatch with parameters.
func (d *Dispatch) MustCall(method string, params ...interface{}) (result *VARIANT) {
id, err := d.GetId(method)
if err != nil {
panic(err)
}
result, err = d.Invoke(id, DISPATCH_METHOD, params)
if err != nil {
panic(err)
}
return
}
// Get property on IDispatch with parameters.
func (d *Dispatch) Get(name string, params ...interface{}) (result *VARIANT, err error) {
id, err := d.GetId(name)
if err != nil {
return
}
result, err = d.Invoke(id, DISPATCH_PROPERTYGET, params)
return
}
// MustGet property on IDispatch with parameters.
func (d *Dispatch) MustGet(name string, params ...interface{}) (result *VARIANT) {
id, err := d.GetId(name)
if err != nil {
panic(err)
}
result, err = d.Invoke(id, DISPATCH_PROPERTYGET, params)
if err != nil {
panic(err)
}
return
}
// Set property on IDispatch with parameters.
func (d *Dispatch) Set(name string, params ...interface{}) (result *VARIANT, err error) {
id, err := d.GetId(name)
if err != nil {
return
}
result, err = d.Invoke(id, DISPATCH_PROPERTYPUT, params)
return
}
// MustSet property on IDispatch with parameters.
func (d *Dispatch) MustSet(name string, params ...interface{}) (result *VARIANT) {
id, err := d.GetId(name)
if err != nil {
panic(err)
}
result, err = d.Invoke(id, DISPATCH_PROPERTYPUT, params)
if err != nil {
panic(err)
}
return
}
// GetId retrieves ID of name on IDispatch.
func (d *Dispatch) GetId(name string) (id int32, err error) {
var dispid []int32
dispid, err = d.Object.GetIDsOfName([]string{name})
if err != nil {
return
}
id = dispid[0]
return
}
// GetIds retrieves all IDs of names on IDispatch.
func (d *Dispatch) GetIds(names ...string) (dispid []int32, err error) {
dispid, err = d.Object.GetIDsOfName(names)
return
}
// Invoke calls the member identified by the dispatch ID, using the given
// dispatch type and parameters.
//
// There have been problems where forwarding an empty variadic parameter list
// would error out, which is why the length check below special-cases it.
func (d *Dispatch) Invoke(id int32, dispatch int16, params []interface{}) (result *VARIANT, err error) {
if len(params) < 1 {
result, err = d.Object.Invoke(id, dispatch)
} else {
result, err = d.Object.Invoke(id, dispatch, params...)
}
return
}
// Release IDispatch object.
func (d *Dispatch) Release() {
d.Object.Release()
}
// Connect initializes COM and attempts to load IUnknown based on given names.
func Connect(names ...string) (connection *Connection) {
	connection = &Connection{}
	connection.Initialize()
	connection.Load(names...)
	return
}
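// Illustrative sketch (hypothetical helper, not part of the vendored file):
// end-to-end use of the deprecated fluent interface above. The ProgID and
// method name are caller-supplied placeholders; on non-Windows builds every
// step returns E_NOTIMPL.
func openAndCall(progID, method string) error {
	conn := &Connection{}
	if err := conn.Initialize(); err != nil {
		return err
	}
	defer conn.Uninitialize()
	if err := conn.Create(progID); err != nil {
		return err
	}
	defer conn.Release()
	disp, err := conn.Dispatch()
	if err != nil {
		return err
	}
	defer disp.Release()
	_, err = disp.Call(method)
	return err
}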

vendor/github.com/go-ole/go-ole/constants.go generated vendored Normal file
@@ -0,0 +1,153 @@
package ole
const (
CLSCTX_INPROC_SERVER = 1
CLSCTX_INPROC_HANDLER = 2
CLSCTX_LOCAL_SERVER = 4
CLSCTX_INPROC_SERVER16 = 8
CLSCTX_REMOTE_SERVER = 16
CLSCTX_ALL = CLSCTX_INPROC_SERVER | CLSCTX_INPROC_HANDLER | CLSCTX_LOCAL_SERVER
CLSCTX_INPROC = CLSCTX_INPROC_SERVER | CLSCTX_INPROC_HANDLER
CLSCTX_SERVER = CLSCTX_INPROC_SERVER | CLSCTX_LOCAL_SERVER | CLSCTX_REMOTE_SERVER
)
const (
COINIT_APARTMENTTHREADED = 0x2
COINIT_MULTITHREADED = 0x0
COINIT_DISABLE_OLE1DDE = 0x4
COINIT_SPEED_OVER_MEMORY = 0x8
)
const (
DISPATCH_METHOD = 1
DISPATCH_PROPERTYGET = 2
DISPATCH_PROPERTYPUT = 4
DISPATCH_PROPERTYPUTREF = 8
)
const (
S_OK = 0x00000000
E_UNEXPECTED = 0x8000FFFF
E_NOTIMPL = 0x80004001
E_OUTOFMEMORY = 0x8007000E
E_INVALIDARG = 0x80070057
E_NOINTERFACE = 0x80004002
E_POINTER = 0x80004003
E_HANDLE = 0x80070006
E_ABORT = 0x80004004
E_FAIL = 0x80004005
E_ACCESSDENIED = 0x80070005
E_PENDING = 0x8000000A
CO_E_CLASSSTRING = 0x800401F3
)
const (
CC_FASTCALL = iota
CC_CDECL
CC_MSCPASCAL
CC_PASCAL = CC_MSCPASCAL
CC_MACPASCAL
CC_STDCALL
CC_FPFASTCALL
CC_SYSCALL
CC_MPWCDECL
CC_MPWPASCAL
CC_MAX = CC_MPWPASCAL
)
type VT uint16
const (
VT_EMPTY VT = 0x0
VT_NULL VT = 0x1
VT_I2 VT = 0x2
VT_I4 VT = 0x3
VT_R4 VT = 0x4
VT_R8 VT = 0x5
VT_CY VT = 0x6
VT_DATE VT = 0x7
VT_BSTR VT = 0x8
VT_DISPATCH VT = 0x9
VT_ERROR VT = 0xa
VT_BOOL VT = 0xb
VT_VARIANT VT = 0xc
VT_UNKNOWN VT = 0xd
VT_DECIMAL VT = 0xe
VT_I1 VT = 0x10
VT_UI1 VT = 0x11
VT_UI2 VT = 0x12
VT_UI4 VT = 0x13
VT_I8 VT = 0x14
VT_UI8 VT = 0x15
VT_INT VT = 0x16
VT_UINT VT = 0x17
VT_VOID VT = 0x18
VT_HRESULT VT = 0x19
VT_PTR VT = 0x1a
VT_SAFEARRAY VT = 0x1b
VT_CARRAY VT = 0x1c
VT_USERDEFINED VT = 0x1d
VT_LPSTR VT = 0x1e
VT_LPWSTR VT = 0x1f
VT_RECORD VT = 0x24
VT_INT_PTR VT = 0x25
VT_UINT_PTR VT = 0x26
VT_FILETIME VT = 0x40
VT_BLOB VT = 0x41
VT_STREAM VT = 0x42
VT_STORAGE VT = 0x43
VT_STREAMED_OBJECT VT = 0x44
VT_STORED_OBJECT VT = 0x45
VT_BLOB_OBJECT VT = 0x46
VT_CF VT = 0x47
VT_CLSID VT = 0x48
VT_BSTR_BLOB VT = 0xfff
VT_VECTOR VT = 0x1000
VT_ARRAY VT = 0x2000
VT_BYREF VT = 0x4000
VT_RESERVED VT = 0x8000
VT_ILLEGAL VT = 0xffff
VT_ILLEGALMASKED VT = 0xfff
VT_TYPEMASK VT = 0xfff
)
const (
DISPID_UNKNOWN = -1
DISPID_VALUE = 0
DISPID_PROPERTYPUT = -3
DISPID_NEWENUM = -4
DISPID_EVALUATE = -5
DISPID_CONSTRUCTOR = -6
DISPID_DESTRUCTOR = -7
DISPID_COLLECT = -8
)
const (
TKIND_ENUM = 1
TKIND_RECORD = 2
TKIND_MODULE = 3
TKIND_INTERFACE = 4
TKIND_DISPATCH = 5
TKIND_COCLASS = 6
TKIND_ALIAS = 7
TKIND_UNION = 8
TKIND_MAX = 9
)
// Safe Array Feature Flags
const (
FADF_AUTO = 0x0001
FADF_STATIC = 0x0002
FADF_EMBEDDED = 0x0004
FADF_FIXEDSIZE = 0x0010
FADF_RECORD = 0x0020
FADF_HAVEIID = 0x0040
FADF_HAVEVARTYPE = 0x0080
FADF_BSTR = 0x0100
FADF_UNKNOWN = 0x0200
FADF_DISPATCH = 0x0400
FADF_VARIANT = 0x0800
FADF_RESERVED = 0xF008
)

vendor/github.com/go-ole/go-ole/error.go generated vendored Normal file
@@ -0,0 +1,51 @@
package ole
// OleError stores COM errors.
type OleError struct {
hr uintptr
description string
subError error
}
// NewError creates new error with HResult.
func NewError(hr uintptr) *OleError {
return &OleError{hr: hr}
}
// NewErrorWithDescription creates new COM error with HResult and description.
func NewErrorWithDescription(hr uintptr, description string) *OleError {
return &OleError{hr: hr, description: description}
}
// NewErrorWithSubError creates new COM error with parent error.
func NewErrorWithSubError(hr uintptr, description string, err error) *OleError {
return &OleError{hr: hr, description: description, subError: err}
}
// Code is the HResult.
func (v *OleError) Code() uintptr {
return uintptr(v.hr)
}
// String returns the description if one was set manually, otherwise the formatted message for the error code.
func (v *OleError) String() string {
if v.description != "" {
return errstr(int(v.hr)) + " (" + v.description + ")"
}
return errstr(int(v.hr))
}
// Error implements error interface.
func (v *OleError) Error() string {
return v.String()
}
// Description retrieves error summary, if there is one.
func (v *OleError) Description() string {
return v.description
}
// SubError returns parent error, if there is one.
func (v *OleError) SubError() error {
return v.subError
}
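// Illustrative sketch (hypothetical helper, not part of the vendored file):
// wrapping an HRESULT with a description and a causing error, then reading
// the pieces back through the accessors above.
func describeFailure(hr uintptr, context string, cause error) string {
	err := NewErrorWithSubError(hr, context, cause)
	msg := err.Error() // formatted HRESULT text plus the description
	if sub := err.SubError(); sub != nil {
		msg += ": " + sub.Error()
	}
	return msg
}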

vendor/github.com/go-ole/go-ole/error_func.go generated vendored Normal file
@@ -0,0 +1,8 @@
// +build !windows
package ole
// errstr converts error code to string.
func errstr(errno int) string {
return ""
}

vendor/github.com/go-ole/go-ole/error_windows.go generated vendored Normal file
@@ -0,0 +1,24 @@
// +build windows
package ole
import (
"fmt"
"syscall"
"unicode/utf16"
)
// errstr converts error code to string.
func errstr(errno int) string {
// ask windows for the remaining errors
var flags uint32 = syscall.FORMAT_MESSAGE_FROM_SYSTEM | syscall.FORMAT_MESSAGE_ARGUMENT_ARRAY | syscall.FORMAT_MESSAGE_IGNORE_INSERTS
b := make([]uint16, 300)
n, err := syscall.FormatMessage(flags, 0, uint32(errno), 0, b, nil)
if err != nil {
return fmt.Sprintf("error %d (FormatMessage failed with: %v)", errno, err)
}
// trim terminating \r and \n
for ; n > 0 && (b[n-1] == '\n' || b[n-1] == '\r'); n-- {
}
return string(utf16.Decode(b[:n]))
}

vendor/github.com/go-ole/go-ole/go.mod generated vendored Normal file
@@ -0,0 +1,3 @@
module github.com/go-ole/go-ole
go 1.12

vendor/github.com/go-ole/go-ole/guid.go generated vendored Normal file
@@ -0,0 +1,284 @@
package ole
var (
// IID_NULL is null Interface ID, used when no other Interface ID is known.
IID_NULL = NewGUID("{00000000-0000-0000-0000-000000000000}")
// IID_IUnknown is for IUnknown interfaces.
IID_IUnknown = NewGUID("{00000000-0000-0000-C000-000000000046}")
// IID_IDispatch is for IDispatch interfaces.
IID_IDispatch = NewGUID("{00020400-0000-0000-C000-000000000046}")
// IID_IEnumVariant is for IEnumVariant interfaces
IID_IEnumVariant = NewGUID("{00020404-0000-0000-C000-000000000046}")
// IID_IConnectionPointContainer is for IConnectionPointContainer interfaces.
IID_IConnectionPointContainer = NewGUID("{B196B284-BAB4-101A-B69C-00AA00341D07}")
// IID_IConnectionPoint is for IConnectionPoint interfaces.
IID_IConnectionPoint = NewGUID("{B196B286-BAB4-101A-B69C-00AA00341D07}")
// IID_IInspectable is for IInspectable interfaces.
IID_IInspectable = NewGUID("{AF86E2E0-B12D-4C6A-9C5A-D7AA65101E90}")
// IID_IProvideClassInfo is for IProvideClassInfo interfaces.
IID_IProvideClassInfo = NewGUID("{B196B283-BAB4-101A-B69C-00AA00341D07}")
)
// These are for testing and not part of any library.
var (
// IID_ICOMTestString is for ICOMTestString interfaces.
//
// {E0133EB4-C36F-469A-9D3D-C66B84BE19ED}
IID_ICOMTestString = NewGUID("{E0133EB4-C36F-469A-9D3D-C66B84BE19ED}")
// IID_ICOMTestInt8 is for ICOMTestInt8 interfaces.
//
// {BEB06610-EB84-4155-AF58-E2BFF53680B4}
IID_ICOMTestInt8 = NewGUID("{BEB06610-EB84-4155-AF58-E2BFF53680B4}")
// IID_ICOMTestInt16 is for ICOMTestInt16 interfaces.
//
// {DAA3F9FA-761E-4976-A860-8364CE55F6FC}
IID_ICOMTestInt16 = NewGUID("{DAA3F9FA-761E-4976-A860-8364CE55F6FC}")
// IID_ICOMTestInt32 is for ICOMTestInt32 interfaces.
//
// {E3DEDEE7-38A2-4540-91D1-2EEF1D8891B0}
IID_ICOMTestInt32 = NewGUID("{E3DEDEE7-38A2-4540-91D1-2EEF1D8891B0}")
// IID_ICOMTestInt64 is for ICOMTestInt64 interfaces.
//
// {8D437CBC-B3ED-485C-BC32-C336432A1623}
IID_ICOMTestInt64 = NewGUID("{8D437CBC-B3ED-485C-BC32-C336432A1623}")
// IID_ICOMTestFloat is for ICOMTestFloat interfaces.
//
// {BF1ED004-EA02-456A-AA55-2AC8AC6B054C}
IID_ICOMTestFloat = NewGUID("{BF1ED004-EA02-456A-AA55-2AC8AC6B054C}")
// IID_ICOMTestDouble is for ICOMTestDouble interfaces.
//
// {BF908A81-8687-4E93-999F-D86FAB284BA0}
IID_ICOMTestDouble = NewGUID("{BF908A81-8687-4E93-999F-D86FAB284BA0}")
// IID_ICOMTestBoolean is for ICOMTestBoolean interfaces.
//
// {D530E7A6-4EE8-40D1-8931-3D63B8605010}
IID_ICOMTestBoolean = NewGUID("{D530E7A6-4EE8-40D1-8931-3D63B8605010}")
// IID_ICOMEchoTestObject is for ICOMEchoTestObject interfaces.
//
// {6485B1EF-D780-4834-A4FE-1EBB51746CA3}
IID_ICOMEchoTestObject = NewGUID("{6485B1EF-D780-4834-A4FE-1EBB51746CA3}")
// IID_ICOMTestTypes is for ICOMTestTypes interfaces.
//
// {CCA8D7AE-91C0-4277-A8B3-FF4EDF28D3C0}
IID_ICOMTestTypes = NewGUID("{CCA8D7AE-91C0-4277-A8B3-FF4EDF28D3C0}")
// CLSID_COMEchoTestObject is for COMEchoTestObject class.
//
// {3C24506A-AE9E-4D50-9157-EF317281F1B0}
CLSID_COMEchoTestObject = NewGUID("{3C24506A-AE9E-4D50-9157-EF317281F1B0}")
// CLSID_COMTestScalarClass is for COMTestScalarClass class.
//
// {865B85C5-0334-4AC6-9EF6-AACEC8FC5E86}
CLSID_COMTestScalarClass = NewGUID("{865B85C5-0334-4AC6-9EF6-AACEC8FC5E86}")
)
const hextable = "0123456789ABCDEF"
const emptyGUID = "{00000000-0000-0000-0000-000000000000}"
// GUID is Windows API specific GUID type.
//
// This exists to match the Windows GUID type for direct passing to COM.
// Format is xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.
type GUID struct {
Data1 uint32
Data2 uint16
Data3 uint16
Data4 [8]byte
}
// NewGUID converts the given string into a globally unique identifier that is
// compliant with the Windows API.
//
// The supplied string may be in any of these formats:
//
// XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
// XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
// {XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}
//
// The conversion of the supplied string is not case-sensitive.
func NewGUID(guid string) *GUID {
d := []byte(guid)
var d1, d2, d3, d4a, d4b []byte
switch len(d) {
case 38:
if d[0] != '{' || d[37] != '}' {
return nil
}
d = d[1:37]
fallthrough
case 36:
if d[8] != '-' || d[13] != '-' || d[18] != '-' || d[23] != '-' {
return nil
}
d1 = d[0:8]
d2 = d[9:13]
d3 = d[14:18]
d4a = d[19:23]
d4b = d[24:36]
case 32:
d1 = d[0:8]
d2 = d[8:12]
d3 = d[12:16]
d4a = d[16:20]
d4b = d[20:32]
default:
return nil
}
var g GUID
var ok1, ok2, ok3, ok4 bool
g.Data1, ok1 = decodeHexUint32(d1)
g.Data2, ok2 = decodeHexUint16(d2)
g.Data3, ok3 = decodeHexUint16(d3)
g.Data4, ok4 = decodeHexByte64(d4a, d4b)
if ok1 && ok2 && ok3 && ok4 {
return &g
}
return nil
}
func decodeHexUint32(src []byte) (value uint32, ok bool) {
var b1, b2, b3, b4 byte
var ok1, ok2, ok3, ok4 bool
b1, ok1 = decodeHexByte(src[0], src[1])
b2, ok2 = decodeHexByte(src[2], src[3])
b3, ok3 = decodeHexByte(src[4], src[5])
b4, ok4 = decodeHexByte(src[6], src[7])
value = (uint32(b1) << 24) | (uint32(b2) << 16) | (uint32(b3) << 8) | uint32(b4)
ok = ok1 && ok2 && ok3 && ok4
return
}
func decodeHexUint16(src []byte) (value uint16, ok bool) {
var b1, b2 byte
var ok1, ok2 bool
b1, ok1 = decodeHexByte(src[0], src[1])
b2, ok2 = decodeHexByte(src[2], src[3])
value = (uint16(b1) << 8) | uint16(b2)
ok = ok1 && ok2
return
}
func decodeHexByte64(s1 []byte, s2 []byte) (value [8]byte, ok bool) {
var ok1, ok2, ok3, ok4, ok5, ok6, ok7, ok8 bool
value[0], ok1 = decodeHexByte(s1[0], s1[1])
value[1], ok2 = decodeHexByte(s1[2], s1[3])
value[2], ok3 = decodeHexByte(s2[0], s2[1])
value[3], ok4 = decodeHexByte(s2[2], s2[3])
value[4], ok5 = decodeHexByte(s2[4], s2[5])
value[5], ok6 = decodeHexByte(s2[6], s2[7])
value[6], ok7 = decodeHexByte(s2[8], s2[9])
value[7], ok8 = decodeHexByte(s2[10], s2[11])
ok = ok1 && ok2 && ok3 && ok4 && ok5 && ok6 && ok7 && ok8
return
}
func decodeHexByte(c1, c2 byte) (value byte, ok bool) {
var n1, n2 byte
var ok1, ok2 bool
n1, ok1 = decodeHexChar(c1)
n2, ok2 = decodeHexChar(c2)
value = (n1 << 4) | n2
ok = ok1 && ok2
return
}
func decodeHexChar(c byte) (byte, bool) {
switch {
case '0' <= c && c <= '9':
return c - '0', true
case 'a' <= c && c <= 'f':
return c - 'a' + 10, true
case 'A' <= c && c <= 'F':
return c - 'A' + 10, true
}
return 0, false
}
// String converts the GUID to string form. It will adhere to this pattern:
//
// {XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}
//
// If the GUID is nil, the string representation of an empty GUID is returned:
//
// {00000000-0000-0000-0000-000000000000}
func (guid *GUID) String() string {
if guid == nil {
return emptyGUID
}
var c [38]byte
c[0] = '{'
putUint32Hex(c[1:9], guid.Data1)
c[9] = '-'
putUint16Hex(c[10:14], guid.Data2)
c[14] = '-'
putUint16Hex(c[15:19], guid.Data3)
c[19] = '-'
putByteHex(c[20:24], guid.Data4[0:2])
c[24] = '-'
putByteHex(c[25:37], guid.Data4[2:8])
c[37] = '}'
return string(c[:])
}
func putUint32Hex(b []byte, v uint32) {
b[0] = hextable[byte(v>>24)>>4]
b[1] = hextable[byte(v>>24)&0x0f]
b[2] = hextable[byte(v>>16)>>4]
b[3] = hextable[byte(v>>16)&0x0f]
b[4] = hextable[byte(v>>8)>>4]
b[5] = hextable[byte(v>>8)&0x0f]
b[6] = hextable[byte(v)>>4]
b[7] = hextable[byte(v)&0x0f]
}
func putUint16Hex(b []byte, v uint16) {
b[0] = hextable[byte(v>>8)>>4]
b[1] = hextable[byte(v>>8)&0x0f]
b[2] = hextable[byte(v)>>4]
b[3] = hextable[byte(v)&0x0f]
}
func putByteHex(dst, src []byte) {
for i := 0; i < len(src); i++ {
dst[i*2] = hextable[src[i]>>4]
dst[i*2+1] = hextable[src[i]&0x0f]
}
}
// IsEqualGUID compares two GUIDs for equality.
//
// Not constant time comparison.
func IsEqualGUID(guid1 *GUID, guid2 *GUID) bool {
return guid1.Data1 == guid2.Data1 &&
guid1.Data2 == guid2.Data2 &&
guid1.Data3 == guid2.Data3 &&
guid1.Data4[0] == guid2.Data4[0] &&
guid1.Data4[1] == guid2.Data4[1] &&
guid1.Data4[2] == guid2.Data4[2] &&
guid1.Data4[3] == guid2.Data4[3] &&
guid1.Data4[4] == guid2.Data4[4] &&
guid1.Data4[5] == guid2.Data4[5] &&
guid1.Data4[6] == guid2.Data4[6] &&
guid1.Data4[7] == guid2.Data4[7]
}
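// Illustrative sketch (hypothetical helper, not part of the vendored file):
// the three accepted input forms parse to the same GUID, and String always
// renders the braced, hyphenated, upper-case form.
func guidFormsAgree() bool {
	a := NewGUID("{00020400-0000-0000-C000-000000000046}")
	b := NewGUID("00020400-0000-0000-C000-000000000046")
	c := NewGUID("0002040000000000C000000000000046")
	if a == nil || b == nil || c == nil {
		return false
	}
	return IsEqualGUID(a, b) && IsEqualGUID(b, c) &&
		a.String() == "{00020400-0000-0000-C000-000000000046}"
}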

vendor/github.com/go-ole/go-ole/iconnectionpoint.go generated vendored Normal file
@@ -0,0 +1,20 @@
package ole
import "unsafe"
type IConnectionPoint struct {
IUnknown
}
type IConnectionPointVtbl struct {
IUnknownVtbl
GetConnectionInterface uintptr
GetConnectionPointContainer uintptr
Advise uintptr
Unadvise uintptr
EnumConnections uintptr
}
func (v *IConnectionPoint) VTable() *IConnectionPointVtbl {
return (*IConnectionPointVtbl)(unsafe.Pointer(v.RawVTable))
}

@@ -0,0 +1,21 @@
// +build !windows
package ole
import "unsafe"
func (v *IConnectionPoint) GetConnectionInterface(piid **GUID) int32 {
return int32(0)
}
func (v *IConnectionPoint) Advise(unknown *IUnknown) (uint32, error) {
return uint32(0), NewError(E_NOTIMPL)
}
func (v *IConnectionPoint) Unadvise(cookie uint32) error {
return NewError(E_NOTIMPL)
}
func (v *IConnectionPoint) EnumConnections(p *unsafe.Pointer) (err error) {
return NewError(E_NOTIMPL)
}

@@ -0,0 +1,43 @@
// +build windows
package ole
import (
"syscall"
"unsafe"
)
func (v *IConnectionPoint) GetConnectionInterface(piid **GUID) int32 {
// XXX: This doesn't look like it does what it's supposed to
return release((*IUnknown)(unsafe.Pointer(v)))
}
func (v *IConnectionPoint) Advise(unknown *IUnknown) (cookie uint32, err error) {
hr, _, _ := syscall.Syscall(
v.VTable().Advise,
3,
uintptr(unsafe.Pointer(v)),
uintptr(unsafe.Pointer(unknown)),
uintptr(unsafe.Pointer(&cookie)))
if hr != 0 {
err = NewError(hr)
}
return
}
func (v *IConnectionPoint) Unadvise(cookie uint32) (err error) {
hr, _, _ := syscall.Syscall(
v.VTable().Unadvise,
2,
uintptr(unsafe.Pointer(v)),
uintptr(cookie),
0)
if hr != 0 {
err = NewError(hr)
}
return
}
func (v *IConnectionPoint) EnumConnections(p *unsafe.Pointer) error {
return NewError(E_NOTIMPL)
}

@@ -0,0 +1,17 @@
package ole
import "unsafe"
type IConnectionPointContainer struct {
IUnknown
}
type IConnectionPointContainerVtbl struct {
IUnknownVtbl
EnumConnectionPoints uintptr
FindConnectionPoint uintptr
}
func (v *IConnectionPointContainer) VTable() *IConnectionPointContainerVtbl {
return (*IConnectionPointContainerVtbl)(unsafe.Pointer(v.RawVTable))
}

@@ -0,0 +1,11 @@
// +build !windows
package ole
func (v *IConnectionPointContainer) EnumConnectionPoints(points interface{}) error {
return NewError(E_NOTIMPL)
}
func (v *IConnectionPointContainer) FindConnectionPoint(iid *GUID, point **IConnectionPoint) error {
return NewError(E_NOTIMPL)
}

@@ -0,0 +1,25 @@
// +build windows
package ole
import (
"syscall"
"unsafe"
)
func (v *IConnectionPointContainer) EnumConnectionPoints(points interface{}) error {
return NewError(E_NOTIMPL)
}
func (v *IConnectionPointContainer) FindConnectionPoint(iid *GUID, point **IConnectionPoint) (err error) {
hr, _, _ := syscall.Syscall(
v.VTable().FindConnectionPoint,
3,
uintptr(unsafe.Pointer(v)),
uintptr(unsafe.Pointer(iid)),
uintptr(unsafe.Pointer(point)))
if hr != 0 {
err = NewError(hr)
}
return
}
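// Illustrative sketch (hypothetical helper, not part of the vendored file):
// wiring an event sink through a connection point. The container and sink
// are assumed to be obtained elsewhere; the returned function undoes the
// advise and releases the connection point.
func adviseSink(container *IConnectionPointContainer, iid *GUID, sink *IUnknown) (func(), error) {
	var point *IConnectionPoint
	if err := container.FindConnectionPoint(iid, &point); err != nil {
		return nil, err
	}
	cookie, err := point.Advise(sink)
	if err != nil {
		point.Release()
		return nil, err
	}
	return func() {
		point.Unadvise(cookie)
		point.Release()
	}, nil
}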

Some files were not shown because too many files have changed in this diff.